Test Report: QEMU_macOS 19690

                    
f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Tests failed (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.42
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.35
27 TestAddons/Setup 10.73
28 TestCertOptions 10.14
29 TestCertExpiration 197.8
30 TestDockerFlags 12.4
31 TestForceSystemdFlag 10.92
32 TestForceSystemdEnv 10.2
38 TestErrorSpam/setup 9.89
47 TestFunctional/serial/StartWithProxy 9.95
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
61 TestFunctional/serial/MinikubeKubectlCmd 0.78
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.04
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.27
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.28
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.04
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.62
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 9.91
142 TestMultiControlPlane/serial/DeployApp 109.83
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
150 TestMultiControlPlane/serial/RestartSecondaryNode 60.13
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.55
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.62
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
162 TestImageBuild/serial/Setup 9.94
165 TestJSONOutput/start/Command 9.91
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.14
197 TestMountStart/serial/StartWithMountFirst 10.07
200 TestMultiNode/serial/FreshStart2Nodes 10.08
201 TestMultiNode/serial/DeployApp2Nodes 97.59
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.08
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 57.15
209 TestMultiNode/serial/RestartKeepsNodes 8.85
210 TestMultiNode/serial/DeleteNode 0.1
211 TestMultiNode/serial/StopMultiNode 2.02
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.47
217 TestPreload 10.07
219 TestScheduledStopUnix 10.08
220 TestSkaffold 13.44
223 TestRunningBinaryUpgrade 629.02
225 TestKubernetesUpgrade 19.26
239 TestStoppedBinaryUpgrade/Upgrade 585.78
249 TestPause/serial/Start 10.43
252 TestNoKubernetes/serial/StartWithK8s 10.04
253 TestNoKubernetes/serial/StartWithStopK8s 7.5
254 TestNoKubernetes/serial/Start 7.5
255 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.46
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.4
260 TestNoKubernetes/serial/StartNoArgs 5.37
262 TestNetworkPlugins/group/auto/Start 10
263 TestNetworkPlugins/group/kindnet/Start 9.89
264 TestNetworkPlugins/group/flannel/Start 9.78
265 TestNetworkPlugins/group/enable-default-cni/Start 9.85
266 TestNetworkPlugins/group/bridge/Start 10.05
267 TestNetworkPlugins/group/kubenet/Start 9.85
268 TestNetworkPlugins/group/custom-flannel/Start 9.8
269 TestNetworkPlugins/group/calico/Start 9.82
270 TestNetworkPlugins/group/false/Start 9.85
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.08
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.85
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.25
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 10
295 TestStartStop/group/embed-certs/serial/DeployApp 0.09
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
299 TestStartStop/group/embed-certs/serial/SecondStart 5.2
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.97
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
305 TestStartStop/group/embed-certs/serial/Pause 0.14
307 TestStartStop/group/newest-cni/serial/FirstStart 11.85
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.14
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
317 TestStartStop/group/newest-cni/serial/SecondStart 5.22
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (13.42s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-294000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-294000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (13.416131459s)

-- stdout --
	{"specversion":"1.0","id":"8e513c9d-f7f1-45c8-ba80-32b2065bbbe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-294000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0bff66d-ba4c-4173-9675-8b938e35bf35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"a8db07ac-5a52-4cef-bfbf-9e7dca462276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig"}}
	{"specversion":"1.0","id":"01edfea6-6839-4b2f-818f-ca86004a7110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0b2dcfa8-84b5-43b7-9370-234b8309a2ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"975010e2-389d-48a5-9a55-cb2437ca046f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube"}}
	{"specversion":"1.0","id":"fb83ef9c-5dc1-49d1-93b1-1ddc18640df8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c5966e1d-a4cd-47e6-87f7-1dcf16cab772","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2be1014-a80e-4bad-88d4-1d77127fa758","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"09e4fe4a-f7de-4f25-84a2-14a8a06813b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"325ab214-84a3-4169-b45e-7508bc94ee63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-294000\" primary control-plane node in \"download-only-294000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"109290b9-8bf2-4e2d-9abc-1e3c052af542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae03001b-b5af-41f2-b23a-4daf5c6aae75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0] Decompressors:map[bz2:0x14000759950 gz:0x14000759958 tar:0x14000759880 tar.bz2:0x14000759890 tar.gz:0x140007598d0 tar.xz:0x14000759920 tar.zst:0x14000759930 tbz2:0x14000759890 tgz:0x1
40007598d0 txz:0x14000759920 tzst:0x14000759930 xz:0x14000759960 zip:0x14000759970 zst:0x14000759968] Getters:map[file:0x14001a62590 http:0x14000884140 https:0x14000884190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"54921a21-1106-4bfb-9ecd-e92605fa73d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0923 04:16:12.070127   18917 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:16:12.070287   18917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:12.070290   18917 out.go:358] Setting ErrFile to fd 2...
	I0923 04:16:12.070293   18917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:12.070426   18917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	W0923 04:16:12.070513   18917 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19690-18362/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19690-18362/.minikube/config/config.json: no such file or directory
	I0923 04:16:12.071767   18917 out.go:352] Setting JSON to true
	I0923 04:16:12.088156   18917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8143,"bootTime":1727082029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:16:12.088215   18917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:16:12.092700   18917 out.go:97] [download-only-294000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:16:12.092892   18917 notify.go:220] Checking for updates...
	W0923 04:16:12.092900   18917 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 04:16:12.097643   18917 out.go:169] MINIKUBE_LOCATION=19690
	I0923 04:16:12.101651   18917 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:16:12.106676   18917 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:16:12.110663   18917 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:16:12.113612   18917 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	W0923 04:16:12.119650   18917 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 04:16:12.119857   18917 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:16:12.122651   18917 out.go:97] Using the qemu2 driver based on user configuration
	I0923 04:16:12.122672   18917 start.go:297] selected driver: qemu2
	I0923 04:16:12.122676   18917 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:16:12.122761   18917 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:16:12.125661   18917 out.go:169] Automatically selected the socket_vmnet network
	I0923 04:16:12.130943   18917 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 04:16:12.131038   18917 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 04:16:12.131094   18917 cni.go:84] Creating CNI manager for ""
	I0923 04:16:12.131135   18917 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 04:16:12.131172   18917 start.go:340] cluster config:
	{Name:download-only-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:16:12.134971   18917 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:16:12.138677   18917 out.go:97] Downloading VM boot image ...
	I0923 04:16:12.138697   18917 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso
	I0923 04:16:17.764375   18917 out.go:97] Starting "download-only-294000" primary control-plane node in "download-only-294000" cluster
	I0923 04:16:17.764399   18917 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:16:17.820396   18917 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:16:17.820406   18917 cache.go:56] Caching tarball of preloaded images
	I0923 04:16:17.820581   18917 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:16:17.825322   18917 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 04:16:17.825329   18917 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:17.908742   18917 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:16:24.192303   18917 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:24.192495   18917 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:24.888677   18917 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 04:16:24.888900   18917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/download-only-294000/config.json ...
	I0923 04:16:24.888920   18917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/download-only-294000/config.json: {Name:mk4bb948808b67e8544bc89978580e0632134115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:16:24.889162   18917 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:16:24.890052   18917 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0923 04:16:25.399468   18917 out.go:193] 
	W0923 04:16:25.408490   18917 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0] Decompressors:map[bz2:0x14000759950 gz:0x14000759958 tar:0x14000759880 tar.bz2:0x14000759890 tar.gz:0x140007598d0 tar.xz:0x14000759920 tar.zst:0x14000759930 tbz2:0x14000759890 tgz:0x140007598d0 txz:0x14000759920 tzst:0x14000759930 xz:0x14000759960 zip:0x14000759970 zst:0x14000759968] Getters:map[file:0x14001a62590 http:0x14000884140 https:0x14000884190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0923 04:16:25.408520   18917 out_reason.go:110] 
	W0923 04:16:25.418211   18917 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:16:25.422319   18917 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-294000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (13.42s)
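
The `getter: &{...}` dump in the error above is hashicorp/go-getter's `Client` struct, which minikube drives for this download. The `?checksum=file:<url>` query tells go-getter to fetch the `.sha256` file first and verify the binary against it, so a 404 on the checksum URL aborts the whole download; v1.20.0 predates upstream darwin/arm64 kubectl builds, which is why that URL 404s here. A minimal sketch of the same call (the destination path is illustrative, not minikube's cache layout):

    package main

    import (
        "context"
        "fmt"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        // The checksum query makes go-getter download the .sha256 file and
        // verify kubectl against it; when that URL returns 404 (as for
        // darwin/arm64 v1.20.0 above), Get() fails and nothing is cached.
        src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
            "?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

        client := &getter.Client{
            Ctx:  context.Background(),
            Src:  src,
            Dst:  "/tmp/kubectl.download", // illustrative path
            Mode: getter.ClientModeFile,   // Mode:2 in the struct dump above
        }
        if err := client.Get(); err != nil {
            fmt.Println("download failed:", err) // "... bad response code: 404"
        }
    }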

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
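
This sub-test is a follow-on failure: it only checks that the json-events run above populated the kubectl cache path. A sketch of the assertion (hypothetical test name, not minikube's literal test code):

    package download_test

    import (
        "os"
        "testing"
    )

    // The sub-test stats the kubectl path the earlier download should have
    // populated; since that download failed, os.Stat reports "no such file
    // or directory" and the test fails.
    func TestKubectlCached(t *testing.T) {
        path := "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
        if _, err := os.Stat(path); err != nil {
            t.Errorf("expected kubectl binary at %q but got error: %v", path, err)
        }
    }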

TestOffline (10.35s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-819000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-819000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.200750583s)

-- stdout --
	* [offline-docker-819000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-819000" primary control-plane node in "offline-docker-819000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-819000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:28:13.967634   20675 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:28:13.967789   20675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:28:13.967792   20675 out.go:358] Setting ErrFile to fd 2...
	I0923 04:28:13.967795   20675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:28:13.967979   20675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:28:13.969203   20675 out.go:352] Setting JSON to false
	I0923 04:28:13.986801   20675 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8864,"bootTime":1727082029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:28:13.986877   20675 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:28:13.992781   20675 out.go:177] * [offline-docker-819000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:28:14.000805   20675 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:28:14.000856   20675 notify.go:220] Checking for updates...
	I0923 04:28:14.008591   20675 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:28:14.011728   20675 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:28:14.015756   20675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:28:14.016916   20675 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:28:14.019747   20675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:28:14.023110   20675 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:28:14.023177   20675 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:28:14.024490   20675 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:28:14.031778   20675 start.go:297] selected driver: qemu2
	I0923 04:28:14.031788   20675 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:28:14.031794   20675 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:28:14.033950   20675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:28:14.037589   20675 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:28:14.040882   20675 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:28:14.040897   20675 cni.go:84] Creating CNI manager for ""
	I0923 04:28:14.040921   20675 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:28:14.040932   20675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:28:14.040963   20675 start.go:340] cluster config:
	{Name:offline-docker-819000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:28:14.044855   20675 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:28:14.052692   20675 out.go:177] * Starting "offline-docker-819000" primary control-plane node in "offline-docker-819000" cluster
	I0923 04:28:14.056735   20675 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:28:14.056768   20675 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:28:14.056782   20675 cache.go:56] Caching tarball of preloaded images
	I0923 04:28:14.056871   20675 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:28:14.056877   20675 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:28:14.056940   20675 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/offline-docker-819000/config.json ...
	I0923 04:28:14.056950   20675 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/offline-docker-819000/config.json: {Name:mk5601cba8522554d5c8b2fe15a5fa92ae8ba886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:28:14.057238   20675 start.go:360] acquireMachinesLock for offline-docker-819000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:28:14.057272   20675 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "offline-docker-819000"
	I0923 04:28:14.057284   20675 start.go:93] Provisioning new machine with config: &{Name:offline-docker-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:28:14.057315   20675 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:28:14.064714   20675 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:28:14.080889   20675 start.go:159] libmachine.API.Create for "offline-docker-819000" (driver="qemu2")
	I0923 04:28:14.080924   20675 client.go:168] LocalClient.Create starting
	I0923 04:28:14.081022   20675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:28:14.081057   20675 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:14.081066   20675 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:14.081112   20675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:28:14.081135   20675 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:14.081144   20675 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:14.081519   20675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:28:14.425086   20675 main.go:141] libmachine: Creating SSH key...
	I0923 04:28:14.578270   20675 main.go:141] libmachine: Creating Disk image...
	I0923 04:28:14.578277   20675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:28:14.578466   20675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2
	I0923 04:28:14.587836   20675 main.go:141] libmachine: STDOUT: 
	I0923 04:28:14.587861   20675 main.go:141] libmachine: STDERR: 
	I0923 04:28:14.587935   20675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2 +20000M
	I0923 04:28:14.596903   20675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:28:14.596922   20675 main.go:141] libmachine: STDERR: 
	I0923 04:28:14.596954   20675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2
	I0923 04:28:14.596966   20675 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:28:14.596982   20675 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:28:14.597011   20675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:7b:03:f8:9c:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2
	I0923 04:28:14.598984   20675 main.go:141] libmachine: STDOUT: 
	I0923 04:28:14.599001   20675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:28:14.599018   20675 client.go:171] duration metric: took 518.090292ms to LocalClient.Create
	I0923 04:28:16.601184   20675 start.go:128] duration metric: took 2.5438615s to createHost
	I0923 04:28:16.601232   20675 start.go:83] releasing machines lock for "offline-docker-819000", held for 2.543963625s
	W0923 04:28:16.601275   20675 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:16.615711   20675 out.go:177] * Deleting "offline-docker-819000" in qemu2 ...
	W0923 04:28:16.638122   20675 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:16.638134   20675 start.go:729] Will try again in 5 seconds ...
	I0923 04:28:21.640331   20675 start.go:360] acquireMachinesLock for offline-docker-819000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:28:21.640741   20675 start.go:364] duration metric: took 323.208µs to acquireMachinesLock for "offline-docker-819000"
	I0923 04:28:21.640866   20675 start.go:93] Provisioning new machine with config: &{Name:offline-docker-819000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-819000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:28:21.641136   20675 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:28:21.645561   20675 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:28:21.698188   20675 start.go:159] libmachine.API.Create for "offline-docker-819000" (driver="qemu2")
	I0923 04:28:21.698235   20675 client.go:168] LocalClient.Create starting
	I0923 04:28:21.698350   20675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:28:21.698419   20675 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:21.698437   20675 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:21.698499   20675 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:28:21.698543   20675 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:21.698553   20675 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:21.699210   20675 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:28:21.888454   20675 main.go:141] libmachine: Creating SSH key...
	I0923 04:28:22.069461   20675 main.go:141] libmachine: Creating Disk image...
	I0923 04:28:22.069476   20675 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:28:22.069688   20675 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2
	I0923 04:28:22.078904   20675 main.go:141] libmachine: STDOUT: 
	I0923 04:28:22.078918   20675 main.go:141] libmachine: STDERR: 
	I0923 04:28:22.078982   20675 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2 +20000M
	I0923 04:28:22.086898   20675 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:28:22.086913   20675 main.go:141] libmachine: STDERR: 
	I0923 04:28:22.086925   20675 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2
	I0923 04:28:22.086930   20675 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:28:22.086939   20675 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:28:22.086972   20675 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:e6:cb:e2:04:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/offline-docker-819000/disk.qcow2
	I0923 04:28:22.088519   20675 main.go:141] libmachine: STDOUT: 
	I0923 04:28:22.088533   20675 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:28:22.088547   20675 client.go:171] duration metric: took 390.307833ms to LocalClient.Create
	I0923 04:28:24.091175   20675 start.go:128] duration metric: took 2.449692s to createHost
	I0923 04:28:24.091297   20675 start.go:83] releasing machines lock for "offline-docker-819000", held for 2.450544875s
	W0923 04:28:24.091652   20675 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-819000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:24.102145   20675 out.go:201] 
	W0923 04:28:24.110208   20675 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:28:24.110253   20675 out.go:270] * 
	* 
	W0923 04:28:24.112902   20675 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:28:24.122139   20675 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-819000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-23 04:28:24.139862 -0700 PDT m=+732.141045835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-819000 -n offline-docker-819000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-819000 -n offline-docker-819000: exit status 7 (66.148917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-819000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-819000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-819000
--- FAIL: TestOffline (10.35s)
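
Both provisioning attempts above fail before QEMU even boots: `socket_vmnet_client` cannot reach the socket_vmnet daemon's unix socket, so the `Connection refused` points at a host-side problem on the build agent rather than a guest failure. A standalone probe of the same socket (a sketch; the socket path is taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same unix socket that socket_vmnet_client connects to in the log
        // above. "connection refused" (or "no such file or directory") means
        // the socket_vmnet daemon is not serving on this agent.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }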

TestAddons/Setup (10.73s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.729330708s)

-- stdout --
	* [addons-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-040000" primary control-plane node in "addons-040000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-040000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:16:34.314911   19013 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:16:34.315024   19013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:34.315027   19013 out.go:358] Setting ErrFile to fd 2...
	I0923 04:16:34.315030   19013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:34.315155   19013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:16:34.316359   19013 out.go:352] Setting JSON to false
	I0923 04:16:34.332730   19013 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8165,"bootTime":1727082029,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:16:34.332795   19013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:16:34.337602   19013 out.go:177] * [addons-040000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:16:34.344567   19013 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:16:34.344600   19013 notify.go:220] Checking for updates...
	I0923 04:16:34.352549   19013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:16:34.355586   19013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:16:34.358550   19013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:16:34.361561   19013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:16:34.364569   19013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:16:34.367737   19013 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:16:34.371523   19013 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:16:34.378604   19013 start.go:297] selected driver: qemu2
	I0923 04:16:34.378611   19013 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:16:34.378618   19013 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:16:34.381280   19013 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:16:34.393043   19013 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:16:34.396688   19013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:16:34.396708   19013 cni.go:84] Creating CNI manager for ""
	I0923 04:16:34.396737   19013 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:16:34.396742   19013 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:16:34.396771   19013 start.go:340] cluster config:
	{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:16:34.400709   19013 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:16:34.409604   19013 out.go:177] * Starting "addons-040000" primary control-plane node in "addons-040000" cluster
	I0923 04:16:34.412573   19013 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:16:34.412589   19013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:16:34.412599   19013 cache.go:56] Caching tarball of preloaded images
	I0923 04:16:34.412660   19013 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:16:34.412666   19013 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:16:34.412888   19013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/addons-040000/config.json ...
	I0923 04:16:34.412901   19013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/addons-040000/config.json: {Name:mke38092eb7a0e765628e891785122eddbe5aaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:16:34.413160   19013 start.go:360] acquireMachinesLock for addons-040000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:16:34.413237   19013 start.go:364] duration metric: took 71.875µs to acquireMachinesLock for "addons-040000"
	I0923 04:16:34.413252   19013 start.go:93] Provisioning new machine with config: &{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:16:34.413287   19013 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:16:34.417592   19013 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 04:16:34.436278   19013 start.go:159] libmachine.API.Create for "addons-040000" (driver="qemu2")
	I0923 04:16:34.436306   19013 client.go:168] LocalClient.Create starting
	I0923 04:16:34.436445   19013 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:16:34.578074   19013 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:16:34.723642   19013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:16:35.367723   19013 main.go:141] libmachine: Creating SSH key...
	I0923 04:16:35.485602   19013 main.go:141] libmachine: Creating Disk image...
	I0923 04:16:35.485609   19013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:16:35.485811   19013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2
	I0923 04:16:35.494967   19013 main.go:141] libmachine: STDOUT: 
	I0923 04:16:35.494992   19013 main.go:141] libmachine: STDERR: 
	I0923 04:16:35.495053   19013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2 +20000M
	I0923 04:16:35.502961   19013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:16:35.502974   19013 main.go:141] libmachine: STDERR: 
	I0923 04:16:35.502990   19013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2
	I0923 04:16:35.502998   19013 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:16:35.503034   19013 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:16:35.503062   19013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:dc:8e:c3:4a:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2
	I0923 04:16:35.504636   19013 main.go:141] libmachine: STDOUT: 
	I0923 04:16:35.504651   19013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:16:35.504681   19013 client.go:171] duration metric: took 1.068366875s to LocalClient.Create
	I0923 04:16:37.506916   19013 start.go:128] duration metric: took 3.093611584s to createHost
	I0923 04:16:37.507011   19013 start.go:83] releasing machines lock for "addons-040000", held for 3.0937815s
	W0923 04:16:37.507127   19013 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:16:37.526485   19013 out.go:177] * Deleting "addons-040000" in qemu2 ...
	W0923 04:16:37.557628   19013 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:16:37.557661   19013 start.go:729] Will try again in 5 seconds ...
	I0923 04:16:42.559930   19013 start.go:360] acquireMachinesLock for addons-040000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:16:42.560457   19013 start.go:364] duration metric: took 397.541µs to acquireMachinesLock for "addons-040000"
	I0923 04:16:42.560591   19013 start.go:93] Provisioning new machine with config: &{Name:addons-040000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-040000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:16:42.560831   19013 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:16:42.580692   19013 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 04:16:42.633285   19013 start.go:159] libmachine.API.Create for "addons-040000" (driver="qemu2")
	I0923 04:16:42.633327   19013 client.go:168] LocalClient.Create starting
	I0923 04:16:42.633458   19013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:16:42.633526   19013 main.go:141] libmachine: Decoding PEM data...
	I0923 04:16:42.633547   19013 main.go:141] libmachine: Parsing certificate...
	I0923 04:16:42.633642   19013 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:16:42.633709   19013 main.go:141] libmachine: Decoding PEM data...
	I0923 04:16:42.633720   19013 main.go:141] libmachine: Parsing certificate...
	I0923 04:16:42.634247   19013 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:16:42.826627   19013 main.go:141] libmachine: Creating SSH key...
	I0923 04:16:42.950446   19013 main.go:141] libmachine: Creating Disk image...
	I0923 04:16:42.950454   19013 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:16:42.950651   19013 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2
	I0923 04:16:42.959818   19013 main.go:141] libmachine: STDOUT: 
	I0923 04:16:42.959860   19013 main.go:141] libmachine: STDERR: 
	I0923 04:16:42.959925   19013 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2 +20000M
	I0923 04:16:42.967794   19013 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:16:42.967872   19013 main.go:141] libmachine: STDERR: 
	I0923 04:16:42.967893   19013 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2
	I0923 04:16:42.967898   19013 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:16:42.967910   19013 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:16:42.967943   19013 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:69:11:7b:d1:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/addons-040000/disk.qcow2
	I0923 04:16:42.969658   19013 main.go:141] libmachine: STDOUT: 
	I0923 04:16:42.969675   19013 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:16:42.969690   19013 client.go:171] duration metric: took 336.35975ms to LocalClient.Create
	I0923 04:16:44.971936   19013 start.go:128] duration metric: took 2.411066375s to createHost
	I0923 04:16:44.972013   19013 start.go:83] releasing machines lock for "addons-040000", held for 2.411543167s
	W0923 04:16:44.972468   19013 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-040000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:16:44.983065   19013 out.go:201] 
	W0923 04:16:44.989171   19013 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:16:44.989195   19013 out.go:270] * 
	* 
	W0923 04:16:44.991939   19013 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:16:45.000084   19013 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-040000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.73s)
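The stderr trace above shows the failing hop precisely: libmachine does not start qemu-system-aarch64 directly, it wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and passes the connection to QEMU as fd 3 (the "-netdev socket,id=net0,fd=3" argument). A sketch that exercises only that hop, assuming socket_vmnet_client's usual "socket path, then command to exec" calling convention; "true" is a stand-in for the real qemu command line:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo $?   # non-zero, with the same "Connection refused", while the daemon is down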

TestCertOptions (10.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-600000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-600000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.874951416s)

-- stdout --
	* [cert-options-600000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-600000" primary control-plane node in "cert-options-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-600000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-600000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-600000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.8605ms)

-- stdout --
	* The control-plane node cert-options-600000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-600000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-600000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-600000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-600000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-600000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.217916ms)

-- stdout --
	* The control-plane node cert-options-600000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-600000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-600000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-600000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-600000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-23 04:40:01.902166 -0700 PDT m=+1429.906518876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-600000 -n cert-options-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-600000 -n cert-options-600000: exit status 7 (30.874ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-600000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-600000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-600000
--- FAIL: TestCertOptions (10.14s)
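The SAN assertions at cert_options_test.go:69 never ran against a live certificate; they failed only because the ssh step exited 83 on a stopped host. On a cluster that does boot, the same check can be made by hand, a sketch reusing the exact command from the log with a grep added for readability:

	out/minikube-darwin-arm64 -p cert-options-600000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'
	# Expected to list 127.0.0.1, 192.168.15.15, localhost and www.google.com,
	# matching the --apiserver-ips/--apiserver-names flags above.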

TestCertExpiration (197.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-265000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-265000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.412133667s)

-- stdout --
	* [cert-expiration-265000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-265000" primary control-plane node in "cert-expiration-265000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-265000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-265000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-265000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-265000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-265000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.231175542s)

-- stdout --
	* [cert-expiration-265000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-265000" primary control-plane node in "cert-expiration-265000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-265000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-265000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-265000" primary control-plane node in "cert-expiration-265000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-265000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-265000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-23 04:42:47.057503 -0700 PDT m=+1595.062607126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-265000 -n cert-expiration-265000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-265000 -n cert-expiration-265000: exit status 7 (66.185958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-265000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-265000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-265000
--- FAIL: TestCertExpiration (197.80s)
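For context on the 197.80s wall time: the test starts the profile with --cert-expiration=3m, waits out that expiry window, then restarts with --cert-expiration=8760h and expects a warning about expired certificates; here neither start got far enough to mint any. A sketch for inspecting the expiry directly on a profile that does boot (paths as in the log):

	out/minikube-darwin-arm64 -p cert-expiration-265000 ssh \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# With --cert-expiration=3m, notAfter should land ~3 minutes after creation.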

TestDockerFlags (12.4s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-346000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-346000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.152382208s)

-- stdout --
	* [docker-flags-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-346000" primary control-plane node in "docker-flags-346000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-346000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:39:39.502661   21601 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:39:39.502855   21601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:39:39.502863   21601 out.go:358] Setting ErrFile to fd 2...
	I0923 04:39:39.502865   21601 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:39:39.503018   21601 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:39:39.504448   21601 out.go:352] Setting JSON to false
	I0923 04:39:39.524075   21601 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9550,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:39:39.524180   21601 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:39:39.533035   21601 out.go:177] * [docker-flags-346000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:39:39.542087   21601 notify.go:220] Checking for updates...
	I0923 04:39:39.549027   21601 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:39:39.556954   21601 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:39:39.566950   21601 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:39:39.575003   21601 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:39:39.583007   21601 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:39:39.590908   21601 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:39:39.596462   21601 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:39:39.596546   21601 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:39:39.596604   21601 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:39:39.601992   21601 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:39:39.609816   21601 start.go:297] selected driver: qemu2
	I0923 04:39:39.609823   21601 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:39:39.609832   21601 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:39:39.612836   21601 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:39:39.619038   21601 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:39:39.623109   21601 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0923 04:39:39.623138   21601 cni.go:84] Creating CNI manager for ""
	I0923 04:39:39.623179   21601 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:39:39.623191   21601 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:39:39.623232   21601 start.go:340] cluster config:
	{Name:docker-flags-346000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-346000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:39:39.628118   21601 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:39:39.640048   21601 out.go:177] * Starting "docker-flags-346000" primary control-plane node in "docker-flags-346000" cluster
	I0923 04:39:39.645075   21601 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:39:39.645102   21601 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:39:39.645111   21601 cache.go:56] Caching tarball of preloaded images
	I0923 04:39:39.645219   21601 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:39:39.645226   21601 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:39:39.645310   21601 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/docker-flags-346000/config.json ...
	I0923 04:39:39.645327   21601 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/docker-flags-346000/config.json: {Name:mkd8735c2021a9d73112dc2a0fc97c13fb988a55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:39:39.645702   21601 start.go:360] acquireMachinesLock for docker-flags-346000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:39:41.779409   21601 start.go:364] duration metric: took 2.133687s to acquireMachinesLock for "docker-flags-346000"
	I0923 04:39:41.779621   21601 start.go:93] Provisioning new machine with config: &{Name:docker-flags-346000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-346000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:39:41.779800   21601 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:39:41.792085   21601 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:39:41.842980   21601 start.go:159] libmachine.API.Create for "docker-flags-346000" (driver="qemu2")
	I0923 04:39:41.843057   21601 client.go:168] LocalClient.Create starting
	I0923 04:39:41.843244   21601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:39:41.843313   21601 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:41.843334   21601 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:41.843407   21601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:39:41.843453   21601 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:41.843469   21601 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:41.844124   21601 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:39:42.024217   21601 main.go:141] libmachine: Creating SSH key...
	I0923 04:39:42.114998   21601 main.go:141] libmachine: Creating Disk image...
	I0923 04:39:42.115004   21601 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:39:42.115204   21601 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2
	I0923 04:39:42.124333   21601 main.go:141] libmachine: STDOUT: 
	I0923 04:39:42.124349   21601 main.go:141] libmachine: STDERR: 
	I0923 04:39:42.124417   21601 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2 +20000M
	I0923 04:39:42.132303   21601 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:39:42.132320   21601 main.go:141] libmachine: STDERR: 
	I0923 04:39:42.132337   21601 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2
	I0923 04:39:42.132343   21601 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:39:42.132353   21601 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:39:42.132382   21601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:95:24:fc:34:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2
	I0923 04:39:42.134110   21601 main.go:141] libmachine: STDOUT: 
	I0923 04:39:42.134128   21601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:39:42.134149   21601 client.go:171] duration metric: took 291.070792ms to LocalClient.Create
	I0923 04:39:44.136309   21601 start.go:128] duration metric: took 2.3564875s to createHost
	I0923 04:39:44.136371   21601 start.go:83] releasing machines lock for "docker-flags-346000", held for 2.356909333s
	W0923 04:39:44.136472   21601 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:44.147859   21601 out.go:177] * Deleting "docker-flags-346000" in qemu2 ...
	W0923 04:39:44.178167   21601 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:44.178193   21601 start.go:729] Will try again in 5 seconds ...
	I0923 04:39:49.180467   21601 start.go:360] acquireMachinesLock for docker-flags-346000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:39:49.180939   21601 start.go:364] duration metric: took 354.916µs to acquireMachinesLock for "docker-flags-346000"
	I0923 04:39:49.181052   21601 start.go:93] Provisioning new machine with config: &{Name:docker-flags-346000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-346000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:39:49.181371   21601 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:39:49.186332   21601 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:39:49.236378   21601 start.go:159] libmachine.API.Create for "docker-flags-346000" (driver="qemu2")
	I0923 04:39:49.236427   21601 client.go:168] LocalClient.Create starting
	I0923 04:39:49.236547   21601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:39:49.236644   21601 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:49.236664   21601 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:49.236730   21601 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:39:49.236775   21601 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:49.236788   21601 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:49.237684   21601 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:39:49.422591   21601 main.go:141] libmachine: Creating SSH key...
	I0923 04:39:49.557204   21601 main.go:141] libmachine: Creating Disk image...
	I0923 04:39:49.557210   21601 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:39:49.557660   21601 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2
	I0923 04:39:49.567089   21601 main.go:141] libmachine: STDOUT: 
	I0923 04:39:49.567105   21601 main.go:141] libmachine: STDERR: 
	I0923 04:39:49.567151   21601 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2 +20000M
	I0923 04:39:49.574996   21601 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:39:49.575009   21601 main.go:141] libmachine: STDERR: 
	I0923 04:39:49.575024   21601 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2
	I0923 04:39:49.575028   21601 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:39:49.575040   21601 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:39:49.575066   21601 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:0a:3c:40:09:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/docker-flags-346000/disk.qcow2
	I0923 04:39:49.576745   21601 main.go:141] libmachine: STDOUT: 
	I0923 04:39:49.576760   21601 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:39:49.576771   21601 client.go:171] duration metric: took 340.339625ms to LocalClient.Create
	I0923 04:39:51.578933   21601 start.go:128] duration metric: took 2.397539291s to createHost
	I0923 04:39:51.578992   21601 start.go:83] releasing machines lock for "docker-flags-346000", held for 2.398038292s
	W0923 04:39:51.579378   21601 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-346000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-346000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:51.589860   21601 out.go:201] 
	W0923 04:39:51.597111   21601 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:39:51.597137   21601 out.go:270] * 
	* 
	W0923 04:39:51.599650   21601 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:39:51.610009   21601 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-346000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (83.611875ms)

-- stdout --
	* The control-plane node docker-flags-346000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-346000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-346000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-346000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-346000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-346000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-346000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (48.010583ms)

-- stdout --
	* The control-plane node docker-flags-346000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-346000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-346000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-346000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-346000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-346000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-23 04:39:51.757508 -0700 PDT m=+1419.761814293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-346000 -n docker-flags-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-346000 -n docker-flags-346000: exit status 7 (31.058958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-346000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-346000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-346000
--- FAIL: TestDockerFlags (12.40s)
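
Both create attempts above (and the failures in the surrounding tests) die at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the QEMU process is never launched and the profile is left Stopped. A minimal triage sketch, assuming socket_vmnet was installed to the Homebrew paths that appear in the log above; the use of brew services and the "echo ok" probe command are illustrative assumptions, not taken from this report:

	# Does the daemon's unix socket exist?
	ls -l /var/run/socket_vmnet
	# Assumption: socket_vmnet is installed via Homebrew and managed by
	# launchd; restart the root service if it is not running.
	sudo brew services restart socket_vmnet
	# Probe with the same client the qemu2 driver uses: the client connects
	# to the socket and execs the given command with the connection on fd 3
	# (matching the -netdev socket,id=net0,fd=3 seen above), so "ok" being
	# printed means the daemon accepted the connection.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

Until that probe succeeds, every qemu2-driver test on this host will fail the same way, regardless of the flags under test.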

TestForceSystemdFlag (10.92s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-464000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-464000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.730029458s)

-- stdout --
	* [force-systemd-flag-464000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-464000" primary control-plane node in "force-systemd-flag-464000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-464000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:39:04.673820   21450 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:39:04.673946   21450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:39:04.673949   21450 out.go:358] Setting ErrFile to fd 2...
	I0923 04:39:04.673951   21450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:39:04.674098   21450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:39:04.675214   21450 out.go:352] Setting JSON to false
	I0923 04:39:04.691380   21450 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9515,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:39:04.691458   21450 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:39:04.695578   21450 out.go:177] * [force-systemd-flag-464000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:39:04.702498   21450 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:39:04.702566   21450 notify.go:220] Checking for updates...
	I0923 04:39:04.709468   21450 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:39:04.712484   21450 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:39:04.715464   21450 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:39:04.718411   21450 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:39:04.721464   21450 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:39:04.724850   21450 config.go:182] Loaded profile config "NoKubernetes-857000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:39:04.724922   21450 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:39:04.724968   21450 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:39:04.728378   21450 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:39:04.735486   21450 start.go:297] selected driver: qemu2
	I0923 04:39:04.735492   21450 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:39:04.735499   21450 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:39:04.737918   21450 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:39:04.739283   21450 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:39:04.742586   21450 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 04:39:04.742611   21450 cni.go:84] Creating CNI manager for ""
	I0923 04:39:04.742640   21450 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:39:04.742647   21450 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:39:04.742672   21450 start.go:340] cluster config:
	{Name:force-systemd-flag-464000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:39:04.746589   21450 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:39:04.754459   21450 out.go:177] * Starting "force-systemd-flag-464000" primary control-plane node in "force-systemd-flag-464000" cluster
	I0923 04:39:04.758516   21450 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:39:04.758533   21450 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:39:04.758541   21450 cache.go:56] Caching tarball of preloaded images
	I0923 04:39:04.758620   21450 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:39:04.758627   21450 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:39:04.758687   21450 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/force-systemd-flag-464000/config.json ...
	I0923 04:39:04.758702   21450 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/force-systemd-flag-464000/config.json: {Name:mka47507a27e38e10efefd53af05860d42a1cfea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:39:04.758931   21450 start.go:360] acquireMachinesLock for force-systemd-flag-464000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:39:05.372920   21450 start.go:364] duration metric: took 613.956333ms to acquireMachinesLock for "force-systemd-flag-464000"
	I0923 04:39:05.373206   21450 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:39:05.373461   21450 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:39:05.385635   21450 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:39:05.435802   21450 start.go:159] libmachine.API.Create for "force-systemd-flag-464000" (driver="qemu2")
	I0923 04:39:05.435857   21450 client.go:168] LocalClient.Create starting
	I0923 04:39:05.435987   21450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:39:05.436045   21450 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:05.436062   21450 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:05.436134   21450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:39:05.436187   21450 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:05.436201   21450 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:05.436982   21450 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:39:05.725980   21450 main.go:141] libmachine: Creating SSH key...
	I0923 04:39:05.791638   21450 main.go:141] libmachine: Creating Disk image...
	I0923 04:39:05.791647   21450 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:39:05.791868   21450 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2
	I0923 04:39:05.801252   21450 main.go:141] libmachine: STDOUT: 
	I0923 04:39:05.801266   21450 main.go:141] libmachine: STDERR: 
	I0923 04:39:05.801336   21450 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2 +20000M
	I0923 04:39:05.809129   21450 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:39:05.809144   21450 main.go:141] libmachine: STDERR: 
	I0923 04:39:05.809167   21450 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2
	I0923 04:39:05.809174   21450 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:39:05.809188   21450 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:39:05.809217   21450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:b4:6b:15:64:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2
	I0923 04:39:05.810887   21450 main.go:141] libmachine: STDOUT: 
	I0923 04:39:05.810898   21450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:39:05.810917   21450 client.go:171] duration metric: took 375.054541ms to LocalClient.Create
	I0923 04:39:07.813075   21450 start.go:128] duration metric: took 2.439591625s to createHost
	I0923 04:39:07.813126   21450 start.go:83] releasing machines lock for "force-systemd-flag-464000", held for 2.440078334s
	W0923 04:39:07.813254   21450 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:07.834808   21450 out.go:177] * Deleting "force-systemd-flag-464000" in qemu2 ...
	W0923 04:39:07.872152   21450 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:07.872178   21450 start.go:729] Will try again in 5 seconds ...
	I0923 04:39:12.874290   21450 start.go:360] acquireMachinesLock for force-systemd-flag-464000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:39:12.878059   21450 start.go:364] duration metric: took 3.6745ms to acquireMachinesLock for "force-systemd-flag-464000"
	I0923 04:39:12.878140   21450 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-464000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-464000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:39:12.878385   21450 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:39:12.888307   21450 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:39:12.936746   21450 start.go:159] libmachine.API.Create for "force-systemd-flag-464000" (driver="qemu2")
	I0923 04:39:12.936786   21450 client.go:168] LocalClient.Create starting
	I0923 04:39:12.936892   21450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:39:12.936955   21450 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:12.936999   21450 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:12.937058   21450 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:39:12.937104   21450 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:12.937119   21450 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:12.937620   21450 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:39:13.202336   21450 main.go:141] libmachine: Creating SSH key...
	I0923 04:39:13.301435   21450 main.go:141] libmachine: Creating Disk image...
	I0923 04:39:13.301441   21450 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:39:13.301663   21450 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2
	I0923 04:39:13.311157   21450 main.go:141] libmachine: STDOUT: 
	I0923 04:39:13.311172   21450 main.go:141] libmachine: STDERR: 
	I0923 04:39:13.311230   21450 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2 +20000M
	I0923 04:39:13.319075   21450 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:39:13.319094   21450 main.go:141] libmachine: STDERR: 
	I0923 04:39:13.319106   21450 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2
	I0923 04:39:13.319110   21450 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:39:13.319119   21450 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:39:13.319151   21450 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:2f:45:f9:63:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-flag-464000/disk.qcow2
	I0923 04:39:13.320838   21450 main.go:141] libmachine: STDOUT: 
	I0923 04:39:13.320866   21450 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:39:13.320880   21450 client.go:171] duration metric: took 384.090542ms to LocalClient.Create
	I0923 04:39:15.323185   21450 start.go:128] duration metric: took 2.444763792s to createHost
	I0923 04:39:15.323248   21450 start.go:83] releasing machines lock for "force-systemd-flag-464000", held for 2.445159791s
	W0923 04:39:15.323631   21450 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-464000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:15.333108   21450 out.go:201] 
	W0923 04:39:15.344635   21450 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:39:15.344663   21450 out.go:270] * 
	* 
	W0923 04:39:15.347090   21450 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:39:15.360136   21450 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-464000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-464000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-464000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.42275ms)

-- stdout --
	* The control-plane node force-systemd-flag-464000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-464000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-464000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-23 04:39:15.456411 -0700 PDT m=+1383.460552960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-464000 -n force-systemd-flag-464000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-464000 -n force-systemd-flag-464000: exit status 7 (33.515833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-464000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-464000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-464000
--- FAIL: TestForceSystemdFlag (10.92s)
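
TestForceSystemdFlag never reaches its real assertion (that "docker info --format {{.CgroupDriver}}" reports the systemd driver): the VM cannot attach to the socket_vmnet network, the host stays Stopped, and the ssh step exits with status 83. Once the daemon is reachable again, the socket_vmnet-sensitive tests could be re-run in isolation with standard go test selectors; a sketch, assuming the upstream layout where the suite lives in test/integration behind the integration build tag (the CI harness normally passes additional flags, such as the driver and the binary under test, which are omitted here):

	# From the minikube repo root, re-run only the tests that failed above.
	go test -tags=integration ./test/integration \
		-run 'TestDockerFlags|TestForceSystemdFlag|TestForceSystemdEnv' \
		-timeout 30m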

TestForceSystemdEnv (10.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-164000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-164000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.976990666s)

-- stdout --
	* [force-systemd-env-164000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-164000" primary control-plane node in "force-systemd-env-164000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-164000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:39:29.303530   21566 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:39:29.303677   21566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:39:29.303680   21566 out.go:358] Setting ErrFile to fd 2...
	I0923 04:39:29.303683   21566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:39:29.303815   21566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:39:29.305130   21566 out.go:352] Setting JSON to false
	I0923 04:39:29.322032   21566 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9540,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:39:29.322106   21566 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:39:29.327254   21566 out.go:177] * [force-systemd-env-164000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:39:29.337300   21566 notify.go:220] Checking for updates...
	I0923 04:39:29.341160   21566 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:39:29.352229   21566 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:39:29.359193   21566 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:39:29.373726   21566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:39:29.389281   21566 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:39:29.400299   21566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0923 04:39:29.405638   21566 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:39:29.405695   21566 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:39:29.414254   21566 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:39:29.421210   21566 start.go:297] selected driver: qemu2
	I0923 04:39:29.421219   21566 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:39:29.421226   21566 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:39:29.423694   21566 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:39:29.427214   21566 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:39:29.428579   21566 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 04:39:29.428600   21566 cni.go:84] Creating CNI manager for ""
	I0923 04:39:29.428631   21566 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:39:29.428635   21566 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:39:29.428677   21566 start.go:340] cluster config:
	{Name:force-systemd-env-164000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:39:29.433341   21566 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:39:29.441293   21566 out.go:177] * Starting "force-systemd-env-164000" primary control-plane node in "force-systemd-env-164000" cluster
	I0923 04:39:29.445235   21566 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:39:29.445252   21566 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:39:29.445260   21566 cache.go:56] Caching tarball of preloaded images
	I0923 04:39:29.445331   21566 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:39:29.445336   21566 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:39:29.445397   21566 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/force-systemd-env-164000/config.json ...
	I0923 04:39:29.445406   21566 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/force-systemd-env-164000/config.json: {Name:mkd14b7be0b36a128d6b3cbf9597664cdada0f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:39:29.445606   21566 start.go:360] acquireMachinesLock for force-systemd-env-164000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:39:29.445643   21566 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "force-systemd-env-164000"
	I0923 04:39:29.445653   21566 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:39:29.445679   21566 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:39:29.449178   21566 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:39:29.464853   21566 start.go:159] libmachine.API.Create for "force-systemd-env-164000" (driver="qemu2")
	I0923 04:39:29.464888   21566 client.go:168] LocalClient.Create starting
	I0923 04:39:29.464948   21566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:39:29.464979   21566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:29.464989   21566 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:29.465027   21566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:39:29.465051   21566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:29.465063   21566 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:29.471874   21566 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:39:29.714030   21566 main.go:141] libmachine: Creating SSH key...
	I0923 04:39:29.815998   21566 main.go:141] libmachine: Creating Disk image...
	I0923 04:39:29.816004   21566 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:39:29.816215   21566 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0923 04:39:29.825547   21566 main.go:141] libmachine: STDOUT: 
	I0923 04:39:29.825567   21566 main.go:141] libmachine: STDERR: 
	I0923 04:39:29.825635   21566 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2 +20000M
	I0923 04:39:29.833645   21566 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:39:29.833660   21566 main.go:141] libmachine: STDERR: 
	I0923 04:39:29.833684   21566 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0923 04:39:29.833689   21566 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:39:29.833701   21566 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:39:29.833729   21566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b3:05:99:1b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0923 04:39:29.835406   21566 main.go:141] libmachine: STDOUT: 
	I0923 04:39:29.835420   21566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:39:29.835439   21566 client.go:171] duration metric: took 370.546375ms to LocalClient.Create
	I0923 04:39:31.837602   21566 start.go:128] duration metric: took 2.391912042s to createHost
	I0923 04:39:31.837686   21566 start.go:83] releasing machines lock for "force-systemd-env-164000", held for 2.392016s
	W0923 04:39:31.837740   21566 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:31.861912   21566 out.go:177] * Deleting "force-systemd-env-164000" in qemu2 ...
	W0923 04:39:31.889534   21566 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:31.889555   21566 start.go:729] Will try again in 5 seconds ...
	I0923 04:39:36.891821   21566 start.go:360] acquireMachinesLock for force-systemd-env-164000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:39:36.892268   21566 start.go:364] duration metric: took 359.75µs to acquireMachinesLock for "force-systemd-env-164000"
	I0923 04:39:36.892412   21566 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-164000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-164000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:39:36.892703   21566 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:39:36.911428   21566 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:39:36.961061   21566 start.go:159] libmachine.API.Create for "force-systemd-env-164000" (driver="qemu2")
	I0923 04:39:36.961108   21566 client.go:168] LocalClient.Create starting
	I0923 04:39:36.961226   21566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:39:36.961297   21566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:36.961314   21566 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:36.961378   21566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:39:36.961422   21566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:39:36.961433   21566 main.go:141] libmachine: Parsing certificate...
	I0923 04:39:36.962091   21566 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:39:37.134957   21566 main.go:141] libmachine: Creating SSH key...
	I0923 04:39:37.175513   21566 main.go:141] libmachine: Creating Disk image...
	I0923 04:39:37.175521   21566 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:39:37.175718   21566 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0923 04:39:37.184894   21566 main.go:141] libmachine: STDOUT: 
	I0923 04:39:37.184919   21566 main.go:141] libmachine: STDERR: 
	I0923 04:39:37.184973   21566 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2 +20000M
	I0923 04:39:37.192941   21566 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:39:37.192966   21566 main.go:141] libmachine: STDERR: 
	I0923 04:39:37.192985   21566 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0923 04:39:37.192991   21566 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:39:37.193002   21566 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:39:37.193028   21566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:65:97:b9:e4:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/force-systemd-env-164000/disk.qcow2
	I0923 04:39:37.194644   21566 main.go:141] libmachine: STDOUT: 
	I0923 04:39:37.194659   21566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:39:37.194670   21566 client.go:171] duration metric: took 233.557208ms to LocalClient.Create
	I0923 04:39:39.196830   21566 start.go:128] duration metric: took 2.304111875s to createHost
	I0923 04:39:39.196887   21566 start.go:83] releasing machines lock for "force-systemd-env-164000", held for 2.304606583s
	W0923 04:39:39.197287   21566 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-164000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-164000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:39:39.212885   21566 out.go:201] 
	W0923 04:39:39.216957   21566 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:39:39.217029   21566 out.go:270] * 
	* 
	W0923 04:39:39.219635   21566 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:39:39.235848   21566 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-164000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-164000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-164000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.669542ms)

-- stdout --
	* The control-plane node force-systemd-env-164000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-164000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-164000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-23 04:39:39.331191 -0700 PDT m=+1407.335440876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-164000 -n force-systemd-env-164000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-164000 -n force-systemd-env-164000: exit status 7 (34.169916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-164000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-164000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-164000
--- FAIL: TestForceSystemdEnv (10.20s)
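Note: every qemu2-driver failure in this report reduces to the same root cause visible in the STDERR lines above: qemu-system-aarch64 is launched through socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet is not accepting connections. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew (the /opt/socket_vmnet paths above suggest this; the launchd/Homebrew service name is an assumption):

	# Does the socket the qemu2 driver expects exist at all?
	ls -l /var/run/socket_vmnet
	# Is the daemon registered with launchd?
	sudo launchctl list | grep -i socket_vmnet
	# If not, (re)start it as a root service
	sudo brew services restart socket_vmnet

Until that socket accepts connections, every test below that creates or restarts a qemu2 VM fails with the same "Connection refused".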

TestErrorSpam/setup (9.89s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-693000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-693000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 --driver=qemu2 : exit status 80 (9.891244834s)

-- stdout --
	* [nospam-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-693000" primary control-plane node in "nospam-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-693000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-693000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-693000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19690
- KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-693000" primary control-plane node in "nospam-693000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-693000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.89s)

TestFunctional/serial/StartWithProxy (9.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-539000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-539000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.878004541s)

-- stdout --
	* [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-539000" primary control-plane node in "functional-539000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-539000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53098 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53098 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53098 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-539000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19690
- KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-539000" primary control-plane node in "functional-539000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-539000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:53098 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:53098 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:53098 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (68.9435ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.95s)

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
I0923 04:17:15.284367   18914 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-539000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-539000 --alsologtostderr -v=8: exit status 80 (5.185399459s)

-- stdout --
	* [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-539000" primary control-plane node in "functional-539000" cluster
	* Restarting existing qemu2 VM for "functional-539000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-539000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:17:15.315041   19182 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:17:15.315163   19182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:17:15.315166   19182 out.go:358] Setting ErrFile to fd 2...
	I0923 04:17:15.315169   19182 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:17:15.315320   19182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:17:15.316342   19182 out.go:352] Setting JSON to false
	I0923 04:17:15.332301   19182 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8206,"bootTime":1727082029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:17:15.332370   19182 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:17:15.337870   19182 out.go:177] * [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:17:15.345711   19182 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:17:15.345740   19182 notify.go:220] Checking for updates...
	I0923 04:17:15.351669   19182 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:17:15.355649   19182 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:17:15.359761   19182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:17:15.362800   19182 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:17:15.365767   19182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:17:15.369073   19182 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:17:15.369124   19182 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:17:15.373688   19182 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:17:15.380751   19182 start.go:297] selected driver: qemu2
	I0923 04:17:15.380756   19182 start.go:901] validating driver "qemu2" against &{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:17:15.380802   19182 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:17:15.383182   19182 cni.go:84] Creating CNI manager for ""
	I0923 04:17:15.383231   19182 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:17:15.383274   19182 start.go:340] cluster config:
	{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:17:15.386992   19182 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:17:15.394743   19182 out.go:177] * Starting "functional-539000" primary control-plane node in "functional-539000" cluster
	I0923 04:17:15.397707   19182 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:17:15.397723   19182 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:17:15.397735   19182 cache.go:56] Caching tarball of preloaded images
	I0923 04:17:15.397809   19182 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:17:15.397816   19182 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:17:15.397871   19182 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/functional-539000/config.json ...
	I0923 04:17:15.398346   19182 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:17:15.398374   19182 start.go:364] duration metric: took 22.334µs to acquireMachinesLock for "functional-539000"
	I0923 04:17:15.398384   19182 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:17:15.398388   19182 fix.go:54] fixHost starting: 
	I0923 04:17:15.398506   19182 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
	W0923 04:17:15.398516   19182 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:17:15.406507   19182 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
	I0923 04:17:15.410678   19182 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:17:15.410714   19182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
	I0923 04:17:15.412893   19182 main.go:141] libmachine: STDOUT: 
	I0923 04:17:15.412913   19182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:17:15.412947   19182 fix.go:56] duration metric: took 14.556708ms for fixHost
	I0923 04:17:15.412952   19182 start.go:83] releasing machines lock for "functional-539000", held for 14.574084ms
	W0923 04:17:15.412961   19182 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:17:15.412995   19182 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:17:15.413000   19182 start.go:729] Will try again in 5 seconds ...
	I0923 04:17:20.415328   19182 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:17:20.415758   19182 start.go:364] duration metric: took 325.875µs to acquireMachinesLock for "functional-539000"
	I0923 04:17:20.415896   19182 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:17:20.415920   19182 fix.go:54] fixHost starting: 
	I0923 04:17:20.416620   19182 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
	W0923 04:17:20.416646   19182 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:17:20.421339   19182 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
	I0923 04:17:20.425085   19182 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:17:20.425288   19182 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
	I0923 04:17:20.434389   19182 main.go:141] libmachine: STDOUT: 
	I0923 04:17:20.434454   19182 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:17:20.434546   19182 fix.go:56] duration metric: took 18.628708ms for fixHost
	I0923 04:17:20.434570   19182 start.go:83] releasing machines lock for "functional-539000", held for 18.788208ms
	W0923 04:17:20.434758   19182 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:17:20.441991   19182 out.go:201] 
	W0923 04:17:20.446176   19182 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:17:20.446213   19182 out.go:270] * 
	* 
	W0923 04:17:20.448891   19182 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:17:20.457136   19182 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-539000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.187325916s for "functional-539000" cluster.
I0923 04:17:20.472086   18914 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (71.942ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (30.224208ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-539000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (30.783458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
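Note: the context errors here are a downstream symptom rather than an independent bug: minikube only writes a functional-539000 entry into the kubeconfig once the node actually comes up, so kubectl is left with no current context. A quick way to confirm from the same shell (standard kubectl commands; the profile name comes from the logs above):

	# After a successful start this would list a functional-539000 context;
	# on this host it shows none for that profile
	kubectl config get-contexts
	kubectl config current-context

Every kubectl-based subtest that follows fails with the same "context was not found" for this reason.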

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-539000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-539000 get po -A: exit status 1 (26.055833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-539000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-539000\n"*: args "kubectl --context functional-539000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-539000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (31.017416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl images: exit status 83 (41.940125ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.667541ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-539000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (42.043584ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.813375ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-539000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 kubectl -- --context functional-539000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 kubectl -- --context functional-539000 get pods: exit status 1 (746.730209ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-539000
	* no server found for cluster "functional-539000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-539000 kubectl -- --context functional-539000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (32.736542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.78s)
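The kubectl errors here are a downstream symptom rather than a kubectl bug: the earlier failed start never wrote a functional-539000 entry into the kubeconfig, which is exactly what the two stderr lines report. One way to confirm from the host (illustrative, not a command from this run):

	kubectl config get-contexts    # functional-539000 stays missing until a start succeeds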

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-539000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-539000 get pods: exit status 1 (1.005906334s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-539000
	* no server found for cluster "functional-539000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-539000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (30.305208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.04s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-539000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-539000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.177601666s)

-- stdout --
	* [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-539000" primary control-plane node in "functional-539000" cluster
	* Restarting existing qemu2 VM for "functional-539000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-539000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-539000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.177999167s for "functional-539000" cluster.
I0923 04:17:30.754360   18914 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (71.658042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
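Both restart attempts above die at the same point: libmachine execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet, so the VM never boots and the profile stays Stopped. A first-pass triage on the build host could look like this (a sketch; it assumes socket_vmnet is installed as a Homebrew service, which this report does not confirm):

	ls -l /var/run/socket_vmnet               # does the daemon's unix socket exist?
	sudo launchctl list | grep socket_vmnet   # is a launchd job for the daemon loaded?
	sudo brew services restart socket_vmnet   # restart the daemon, if it is Homebrew-managed

With the socket unreachable, every later qemu2 start in this report fails the same way, which matches the cascade of state=Stopped post-mortems that follows.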

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-539000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-539000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.358875ms)

** stderr ** 
	error: context "functional-539000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-539000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (30.963916ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 logs: exit status 83 (79.686792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | -p download-only-294000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| delete  | -p download-only-294000                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| start   | -o=json --download-only                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | -p download-only-913000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| delete  | -p download-only-913000                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| delete  | -p download-only-294000                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| delete  | -p download-only-913000                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| start   | --download-only -p                                                       | binary-mirror-684000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | binary-mirror-684000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53065                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-684000                                                  | binary-mirror-684000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| addons  | enable dashboard -p                                                      | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | addons-040000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | addons-040000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-040000 --wait=true                                             | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-040000                                                         | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| start   | -p nospam-693000 -n=1 --memory=2250 --wait=false                         | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-693000                                                         | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-539000                              |                      |         |         |                     |                     |
	| cache   | functional-539000 cache delete                                           | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | minikube-local-cache-test:functional-539000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	| ssh     | functional-539000 ssh sudo                                               | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-539000                                                        | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-539000 ssh                                                    | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-539000 cache reload                                           | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	| ssh     | functional-539000 ssh                                                    | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-539000 kubectl --                                             | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | --context functional-539000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 04:17:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 04:17:25.603208   19271 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:17:25.603321   19271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:17:25.603323   19271 out.go:358] Setting ErrFile to fd 2...
	I0923 04:17:25.603324   19271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:17:25.603450   19271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:17:25.604492   19271 out.go:352] Setting JSON to false
	I0923 04:17:25.620463   19271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8216,"bootTime":1727082029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:17:25.620529   19271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:17:25.626708   19271 out.go:177] * [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:17:25.634700   19271 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:17:25.634739   19271 notify.go:220] Checking for updates...
	I0923 04:17:25.643589   19271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:17:25.646521   19271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:17:25.649624   19271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:17:25.652671   19271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:17:25.654027   19271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:17:25.656911   19271 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:17:25.656961   19271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:17:25.661624   19271 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:17:25.666592   19271 start.go:297] selected driver: qemu2
	I0923 04:17:25.666596   19271 start.go:901] validating driver "qemu2" against &{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:17:25.666664   19271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:17:25.669103   19271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:17:25.669127   19271 cni.go:84] Creating CNI manager for ""
	I0923 04:17:25.669152   19271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:17:25.669192   19271 start.go:340] cluster config:
	{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:17:25.672848   19271 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:17:25.679509   19271 out.go:177] * Starting "functional-539000" primary control-plane node in "functional-539000" cluster
	I0923 04:17:25.683581   19271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:17:25.683595   19271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:17:25.683600   19271 cache.go:56] Caching tarball of preloaded images
	I0923 04:17:25.683665   19271 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:17:25.683669   19271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:17:25.683713   19271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/functional-539000/config.json ...
	I0923 04:17:25.684191   19271 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:17:25.684228   19271 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "functional-539000"
	I0923 04:17:25.684236   19271 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:17:25.684238   19271 fix.go:54] fixHost starting: 
	I0923 04:17:25.684361   19271 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
	W0923 04:17:25.684368   19271 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:17:25.692623   19271 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
	I0923 04:17:25.696640   19271 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:17:25.696677   19271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
	I0923 04:17:25.698790   19271 main.go:141] libmachine: STDOUT: 
	I0923 04:17:25.698806   19271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:17:25.698839   19271 fix.go:56] duration metric: took 14.597917ms for fixHost
	I0923 04:17:25.698841   19271 start.go:83] releasing machines lock for "functional-539000", held for 14.610958ms
	W0923 04:17:25.698847   19271 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:17:25.698891   19271 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:17:25.698895   19271 start.go:729] Will try again in 5 seconds ...
	I0923 04:17:30.701050   19271 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:17:30.701394   19271 start.go:364] duration metric: took 282.042µs to acquireMachinesLock for "functional-539000"
	I0923 04:17:30.701483   19271 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:17:30.701496   19271 fix.go:54] fixHost starting: 
	I0923 04:17:30.702185   19271 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
	W0923 04:17:30.702204   19271 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:17:30.709620   19271 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
	I0923 04:17:30.712718   19271 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:17:30.712925   19271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
	I0923 04:17:30.721950   19271 main.go:141] libmachine: STDOUT: 
	I0923 04:17:30.722018   19271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:17:30.722110   19271 fix.go:56] duration metric: took 20.612209ms for fixHost
	I0923 04:17:30.722124   19271 start.go:83] releasing machines lock for "functional-539000", held for 20.712667ms
	W0923 04:17:30.722313   19271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:17:30.730618   19271 out.go:201] 
	W0923 04:17:30.733684   19271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:17:30.733719   19271 out.go:270] * 
	W0923 04:17:30.735609   19271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:17:30.742655   19271 out.go:201] 
	
	
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-539000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | -p download-only-294000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-294000                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | -p download-only-913000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-913000                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-294000                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-913000                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-684000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | binary-mirror-684000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53065                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-684000                                                  | binary-mirror-684000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | addons-040000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | addons-040000                                                            |                      |         |         |                     |                     |
| start   | -p addons-040000 --wait=true                                             | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-040000                                                         | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| start   | -p nospam-693000 -n=1 --memory=2250 --wait=false                         | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-693000                                                         | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | minikube-local-cache-test:functional-539000                              |                      |         |         |                     |                     |
| cache   | functional-539000 cache delete                                           | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | minikube-local-cache-test:functional-539000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
| ssh     | functional-539000 ssh sudo                                               | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-539000                                                        | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-539000 ssh                                                    | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-539000 cache reload                                           | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
| ssh     | functional-539000 ssh                                                    | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-539000 kubectl --                                             | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --context functional-539000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/23 04:17:25
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 04:17:25.603208   19271 out.go:345] Setting OutFile to fd 1 ...
I0923 04:17:25.603321   19271 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:17:25.603323   19271 out.go:358] Setting ErrFile to fd 2...
I0923 04:17:25.603324   19271 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:17:25.603450   19271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:17:25.604492   19271 out.go:352] Setting JSON to false
I0923 04:17:25.620463   19271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8216,"bootTime":1727082029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0923 04:17:25.620529   19271 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0923 04:17:25.626708   19271 out.go:177] * [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0923 04:17:25.634700   19271 out.go:177]   - MINIKUBE_LOCATION=19690
I0923 04:17:25.634739   19271 notify.go:220] Checking for updates...
I0923 04:17:25.643589   19271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
I0923 04:17:25.646521   19271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0923 04:17:25.649624   19271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 04:17:25.652671   19271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
I0923 04:17:25.654027   19271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0923 04:17:25.656911   19271 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:17:25.656961   19271 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 04:17:25.661624   19271 out.go:177] * Using the qemu2 driver based on existing profile
I0923 04:17:25.666592   19271 start.go:297] selected driver: qemu2
I0923 04:17:25.666596   19271 start.go:901] validating driver "qemu2" against &{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 04:17:25.666664   19271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 04:17:25.669103   19271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 04:17:25.669127   19271 cni.go:84] Creating CNI manager for ""
I0923 04:17:25.669152   19271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 04:17:25.669192   19271 start.go:340] cluster config:
{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 04:17:25.672848   19271 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 04:17:25.679509   19271 out.go:177] * Starting "functional-539000" primary control-plane node in "functional-539000" cluster
I0923 04:17:25.683581   19271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 04:17:25.683595   19271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 04:17:25.683600   19271 cache.go:56] Caching tarball of preloaded images
I0923 04:17:25.683665   19271 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 04:17:25.683669   19271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 04:17:25.683713   19271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/functional-539000/config.json ...
I0923 04:17:25.684191   19271 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 04:17:25.684228   19271 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "functional-539000"
I0923 04:17:25.684236   19271 start.go:96] Skipping create...Using existing machine configuration
I0923 04:17:25.684238   19271 fix.go:54] fixHost starting: 
I0923 04:17:25.684361   19271 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
W0923 04:17:25.684368   19271 fix.go:138] unexpected machine state, will restart: <nil>
I0923 04:17:25.692623   19271 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
I0923 04:17:25.696640   19271 qemu.go:418] Using hvf for hardware acceleration
I0923 04:17:25.696677   19271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
I0923 04:17:25.698790   19271 main.go:141] libmachine: STDOUT: 
I0923 04:17:25.698806   19271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 04:17:25.698839   19271 fix.go:56] duration metric: took 14.597917ms for fixHost
I0923 04:17:25.698841   19271 start.go:83] releasing machines lock for "functional-539000", held for 14.610958ms
W0923 04:17:25.698847   19271 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 04:17:25.698891   19271 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 04:17:25.698895   19271 start.go:729] Will try again in 5 seconds ...
I0923 04:17:30.701050   19271 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 04:17:30.701394   19271 start.go:364] duration metric: took 282.042µs to acquireMachinesLock for "functional-539000"
I0923 04:17:30.701483   19271 start.go:96] Skipping create...Using existing machine configuration
I0923 04:17:30.701496   19271 fix.go:54] fixHost starting: 
I0923 04:17:30.702185   19271 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
W0923 04:17:30.702204   19271 fix.go:138] unexpected machine state, will restart: <nil>
I0923 04:17:30.709620   19271 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
I0923 04:17:30.712718   19271 qemu.go:418] Using hvf for hardware acceleration
I0923 04:17:30.712925   19271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
I0923 04:17:30.721950   19271 main.go:141] libmachine: STDOUT: 
I0923 04:17:30.722018   19271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0923 04:17:30.722110   19271 fix.go:56] duration metric: took 20.612209ms for fixHost
I0923 04:17:30.722124   19271 start.go:83] releasing machines lock for "functional-539000", held for 20.712667ms
W0923 04:17:30.722313   19271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 04:17:30.730618   19271 out.go:201] 
W0923 04:17:30.733684   19271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 04:17:30.733719   19271 out.go:270] * 
W0923 04:17:30.735609   19271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 04:17:30.742655   19271 out.go:201] 

* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
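
Every restart attempt in the log above dies at the same step: the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and `minikube logs` has no Linux-side output to return. A minimal triage sketch in shell, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 docs describe (the service name and commands below are assumptions, not taken from this report):

  # Check that the socket the driver dials actually exists (path taken from the log above)
  ls -l /var/run/socket_vmnet

  # Check whether a socket_vmnet daemon is currently running
  pgrep -fl socket_vmnet

  # If nothing is listening, (re)start the daemon; a Homebrew install is
  # typically managed as a root-owned service, hence sudo
  sudo brew services restart socket_vmnet

A socket file with no daemon listening behind it produces exactly this "Connection refused", so removing the stale socket and restarting the daemon is the usual fix.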

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3940364065/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | -p download-only-294000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-294000                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| start   | -o=json --download-only                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | -p download-only-913000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-913000                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-294000                                                  | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| delete  | -p download-only-913000                                                  | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| start   | --download-only -p                                                       | binary-mirror-684000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | binary-mirror-684000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53065                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-684000                                                  | binary-mirror-684000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| addons  | enable dashboard -p                                                      | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | addons-040000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | addons-040000                                                            |                      |         |         |                     |                     |
| start   | -p addons-040000 --wait=true                                             | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-040000                                                         | addons-040000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
| start   | -p nospam-693000 -n=1 --memory=2250 --wait=false                         | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-693000 --log_dir                                                  | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-693000                                                         | nospam-693000        | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-539000 cache add                                              | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | minikube-local-cache-test:functional-539000                              |                      |         |         |                     |                     |
| cache   | functional-539000 cache delete                                           | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | minikube-local-cache-test:functional-539000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
| ssh     | functional-539000 ssh sudo                                               | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-539000                                                        | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-539000 ssh                                                    | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-539000 cache reload                                           | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
| ssh     | functional-539000 ssh                                                    | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT | 23 Sep 24 04:17 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-539000 kubectl --                                             | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --context functional-539000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-539000                                                     | functional-539000    | jenkins | v1.34.0 | 23 Sep 24 04:17 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/09/23 04:17:25
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0923 04:17:25.603208   19271 out.go:345] Setting OutFile to fd 1 ...
I0923 04:17:25.603321   19271 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:17:25.603323   19271 out.go:358] Setting ErrFile to fd 2...
I0923 04:17:25.603324   19271 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:17:25.603450   19271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:17:25.604492   19271 out.go:352] Setting JSON to false
I0923 04:17:25.620463   19271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8216,"bootTime":1727082029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0923 04:17:25.620529   19271 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0923 04:17:25.626708   19271 out.go:177] * [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
I0923 04:17:25.634700   19271 out.go:177]   - MINIKUBE_LOCATION=19690
I0923 04:17:25.634739   19271 notify.go:220] Checking for updates...
I0923 04:17:25.643589   19271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
I0923 04:17:25.646521   19271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0923 04:17:25.649624   19271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0923 04:17:25.652671   19271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
I0923 04:17:25.654027   19271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0923 04:17:25.656911   19271 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:17:25.656961   19271 driver.go:394] Setting default libvirt URI to qemu:///system
I0923 04:17:25.661624   19271 out.go:177] * Using the qemu2 driver based on existing profile
I0923 04:17:25.666592   19271 start.go:297] selected driver: qemu2
I0923 04:17:25.666596   19271 start.go:901] validating driver "qemu2" against &{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 04:17:25.666664   19271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0923 04:17:25.669103   19271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0923 04:17:25.669127   19271 cni.go:84] Creating CNI manager for ""
I0923 04:17:25.669152   19271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0923 04:17:25.669192   19271 start.go:340] cluster config:
{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0923 04:17:25.672848   19271 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 04:17:25.679509   19271 out.go:177] * Starting "functional-539000" primary control-plane node in "functional-539000" cluster
I0923 04:17:25.683581   19271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 04:17:25.683595   19271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0923 04:17:25.683600   19271 cache.go:56] Caching tarball of preloaded images
I0923 04:17:25.683665   19271 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0923 04:17:25.683669   19271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
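The preload step above reduces to a file-existence check on the cached tarball. A minimal sketch of that check in Go, using the path from this run's log (adjust for your own MINIKUBE_HOME):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Cache path as logged above; adjust for your own MINIKUBE_HOME.
        tarball := "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4"
        if _, err := os.Stat(tarball); err == nil {
            fmt.Println("found local preload, skipping download")
        } else {
            fmt.Println("no local preload:", err)
        }
    }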
I0923 04:17:25.683713   19271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/functional-539000/config.json ...
I0923 04:17:25.684191   19271 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 04:17:25.684228   19271 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "functional-539000"
I0923 04:17:25.684236   19271 start.go:96] Skipping create...Using existing machine configuration
I0923 04:17:25.684238   19271 fix.go:54] fixHost starting: 
I0923 04:17:25.684361   19271 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
W0923 04:17:25.684368   19271 fix.go:138] unexpected machine state, will restart: <nil>
I0923 04:17:25.692623   19271 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
I0923 04:17:25.696640   19271 qemu.go:418] Using hvf for hardware acceleration
I0923 04:17:25.696677   19271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
I0923 04:17:25.698790   19271 main.go:141] libmachine: STDOUT: 
I0923 04:17:25.698806   19271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0923 04:17:25.698839   19271 fix.go:56] duration metric: took 14.597917ms for fixHost
I0923 04:17:25.698841   19271 start.go:83] releasing machines lock for "functional-539000", held for 14.610958ms
W0923 04:17:25.698847   19271 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 04:17:25.698891   19271 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 04:17:25.698895   19271 start.go:729] Will try again in 5 seconds ...
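"Connection refused" here means nothing is listening on the socket_vmnet unix socket, so the qemu2 driver cannot obtain its network file descriptor; start.go backs off and retries once after 5 seconds. A minimal sketch, assuming the socket path from the log, that probes the socket the way a client would:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path as logged by libmachine above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A failed dial here matches the STDERR in the log:
            // the socket_vmnet daemon is not running or not listening.
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }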
I0923 04:17:30.701050   19271 start.go:360] acquireMachinesLock for functional-539000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 04:17:30.701394   19271 start.go:364] duration metric: took 282.042µs to acquireMachinesLock for "functional-539000"
I0923 04:17:30.701483   19271 start.go:96] Skipping create...Using existing machine configuration
I0923 04:17:30.701496   19271 fix.go:54] fixHost starting: 
I0923 04:17:30.702185   19271 fix.go:112] recreateIfNeeded on functional-539000: state=Stopped err=<nil>
W0923 04:17:30.702204   19271 fix.go:138] unexpected machine state, will restart: <nil>
I0923 04:17:30.709620   19271 out.go:177] * Restarting existing qemu2 VM for "functional-539000" ...
I0923 04:17:30.712718   19271 qemu.go:418] Using hvf for hardware acceleration
I0923 04:17:30.712925   19271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:e2:79:50:c7:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/functional-539000/disk.qcow2
I0923 04:17:30.721950   19271 main.go:141] libmachine: STDOUT: 
I0923 04:17:30.722018   19271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0923 04:17:30.722110   19271 fix.go:56] duration metric: took 20.612209ms for fixHost
I0923 04:17:30.722124   19271 start.go:83] releasing machines lock for "functional-539000", held for 20.712667ms
W0923 04:17:30.722313   19271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-539000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0923 04:17:30.730618   19271 out.go:201] 
W0923 04:17:30.733684   19271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0923 04:17:30.733719   19271 out.go:270] * 
W0923 04:17:30.735609   19271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 04:17:30.742655   19271 out.go:201] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-539000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-539000 apply -f testdata/invalidsvc.yaml: exit status 1 (29.004917ms)
** stderr ** 
	error: context "functional-539000" does not exist
** /stderr **
functional_test.go:2323: kubectl --context functional-539000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-539000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-539000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-539000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-539000 --alsologtostderr -v=1] stderr:
I0923 04:18:06.540068   19612 out.go:345] Setting OutFile to fd 1 ...
I0923 04:18:06.540491   19612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:06.540495   19612 out.go:358] Setting ErrFile to fd 2...
I0923 04:18:06.540498   19612 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:06.540662   19612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:18:06.540880   19612 mustload.go:65] Loading cluster: functional-539000
I0923 04:18:06.541101   19612 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:06.544131   19612 out.go:177] * The control-plane node functional-539000 host is not running: state=Stopped
I0923 04:18:06.548112   19612 out.go:177]   To start a cluster, run: "minikube start -p functional-539000"
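The two TERM lines above come from minikube's output setup: with TERM and COLORTERM both empty, it assumes the stream cannot render color. A rough, simplified illustration of that kind of check (the real logic in out.go is more involved):

    package main

    import (
        "fmt"
        "os"
    )

    // supportsColor is a rough stand-in for the check behind the
    // "TERM=,COLORTERM=, which probably does not support color" lines;
    // minikube's actual detection is more involved.
    func supportsColor() bool {
        term := os.Getenv("TERM")
        return (term != "" && term != "dumb") || os.Getenv("COLORTERM") != ""
    }

    func main() {
        fmt.Println("color:", supportsColor())
    }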
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (42.406667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 status: exit status 7 (30.002875ms)
-- stdout --
	functional-539000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-539000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.820042ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-539000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 status -o json: exit status 7 (29.777834ms)
-- stdout --
	{"Name":"functional-539000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-539000 status -o json" : exit status 7
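The `status -o json` output above is a single flat JSON object, so a caller can decode it with a small struct whose fields mirror the keys in the log. A minimal sketch using the exact output captured above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors the keys of the `status -o json` output above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        // Output captured verbatim from the log above.
        raw := `{"Name":"functional-539000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
        var st Status
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        fmt.Printf("%s: host=%s apiserver=%s\n", st.Name, st.Host, st.APIServer)
    }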
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (29.850125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-539000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-539000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.725083ms)
** stderr ** 
	error: context "functional-539000" does not exist
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-539000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-539000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-539000 describe po hello-node-connect: exit status 1 (26.14675ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000
** /stderr **
functional_test.go:1604: "kubectl --context functional-539000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-539000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-539000 logs -l app=hello-node-connect: exit status 1 (25.630416ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000
** /stderr **
functional_test.go:1610: "kubectl --context functional-539000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-539000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-539000 describe svc hello-node-connect: exit status 1 (25.855417ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000
** /stderr **
functional_test.go:1616: "kubectl --context functional-539000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (30.28475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-539000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (30.504708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "echo hello": exit status 83 (44.479041ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n"*. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "cat /etc/hostname": exit status 83 (43.964666ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-539000"- but got *"* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n"*. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (30.850333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.475083ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-539000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.311708ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-539000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-539000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cp functional-539000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd827802293/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 cp functional-539000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd827802293/001/cp-test.txt: exit status 83 (42.19075ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-539000 cp functional-539000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd827802293/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.745583ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd827802293/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (42.974459ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-539000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (45.725167ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-539000 ssh -n functional-539000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-539000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-539000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.27s)
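The `(-want +got)` blocks above are the output style of a cmp-style diff (github.com/google/go-cmp): the want string is the content of testdata/cp-test.txt and the got string is minikube's "host is not running" advice. A minimal sketch, assuming go-cmp as a dependency, that reproduces a diff of this shape:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // want is the content of testdata/cp-test.txt per the diff above;
        // got is the advice minikube printed instead of the file.
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-539000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-539000\"\n"
        // cmp.Diff returns "" for equal values, otherwise a -want +got report.
        if d := cmp.Diff(want, got); d != "" {
            fmt.Printf("content mismatch (-want +got):\n%s", d)
        }
    }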

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18914/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/test/nested/copy/18914/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/test/nested/copy/18914/hosts": exit status 83 (45.360584ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/test/nested/copy/18914/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-539000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-539000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (32.459208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18914.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/18914.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/18914.pem": exit status 83 (41.502583ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/18914.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo cat /etc/ssl/certs/18914.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/18914.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-539000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-539000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18914.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /usr/share/ca-certificates/18914.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /usr/share/ca-certificates/18914.pem": exit status 83 (41.113208ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/18914.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo cat /usr/share/ca-certificates/18914.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/18914.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-539000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-539000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.6365ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-539000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-539000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/189142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/189142.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/189142.pem": exit status 83 (40.7015ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/189142.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo cat /etc/ssl/certs/189142.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/189142.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-539000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-539000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/189142.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /usr/share/ca-certificates/189142.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /usr/share/ca-certificates/189142.pem": exit status 83 (40.622208ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/189142.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo cat /usr/share/ca-certificates/189142.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/189142.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-539000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-539000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.381208ms)
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-539000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-539000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (31.431083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-539000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-539000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.4165ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-539000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-539000 -n functional-539000: exit status 7 (31.6175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-539000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
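Note: the go-template in the command above walks the first node's metadata.labels and prints only the keys; the test then looks for the minikube.k8s.io/* keys that minikube stamps on each node at provision time. The same check can be written against client-go instead of kubectl (a sketch, not the test's own code; it fails at config load here exactly as kubectl does, because the context was never created):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the kubeconfig context the test expects.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "functional-539000"},
	).ClientConfig()
	if err != nil {
		fmt.Println(err) // "context was not found" reproduces the failure above
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil || len(nodes.Items) == 0 {
		fmt.Println("no nodes:", err)
		return
	}
	for k := range nodes.Items[0].Labels {
		fmt.Println(k) // should include the minikube.k8s.io/* keys
	}
}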

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo systemctl is-active crio": exit status 83 (38.636583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 version -o=json --components: exit status 83 (41.95875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
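Note: the assertions above are plain substring checks against the output of "minikube version -o=json --components". A sketch of the same check, assuming the binary and profile from this run (the component list shown is only a subset of what the test greps for):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-539000",
		"version", "-o=json", "--components").CombinedOutput()
	if err != nil {
		fmt.Println(err) // exit status 83 on a stopped host, as in the log above
	}
	// The test greps the combined output for each component name.
	for _, want := range []string{"minikubeVersion", "docker", "containerd", "crictl"} {
		if !bytes.Contains(out, []byte(want)) {
			fmt.Printf("missing %q in version output\n", want)
		}
	}
}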

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-539000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-539000 image ls --format short --alsologtostderr:
I0923 04:18:06.944992   19627 out.go:345] Setting OutFile to fd 1 ...
I0923 04:18:06.945129   19627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:06.945132   19627 out.go:358] Setting ErrFile to fd 2...
I0923 04:18:06.945134   19627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:06.945241   19627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:18:06.945659   19627 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:06.945720   19627 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-539000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-539000 image ls --format table --alsologtostderr:
I0923 04:18:07.171581   19639 out.go:345] Setting OutFile to fd 1 ...
I0923 04:18:07.171745   19639 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:07.171748   19639 out.go:358] Setting ErrFile to fd 2...
I0923 04:18:07.171750   19639 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:07.171899   19639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:18:07.172302   19639 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:07.172365   19639 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I0923 04:18:10.289031   18914 retry.go:31] will retry after 20.403013548s: Temporary Error: Get "http:": http: no Host in request URL
I0923 04:18:30.694353   18914 retry.go:31] will retry after 49.566252892s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-539000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-539000 image ls --format json --alsologtostderr:
I0923 04:18:07.135471   19637 out.go:345] Setting OutFile to fd 1 ...
I0923 04:18:07.135606   19637 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:07.135610   19637 out.go:358] Setting ErrFile to fd 2...
I0923 04:18:07.135612   19637 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:07.135756   19637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:18:07.136185   19637 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:07.136248   19637 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
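Note: the stdout above is a literal empty JSON array, so any decode-and-search comes up empty. A sketch of the check the test performs on the JSON listing; the struct and its field tags here are assumptions for illustration, not minikube's documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageInfo is a hypothetical shape for one entry of "image ls --format json".
type imageInfo struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-539000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println(err)
		return
	}
	found := false
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/pause") {
				found = true
			}
		}
	}
	fmt.Println("pause image present:", found) // false here: the listing is []
}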

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-539000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-539000 image ls --format yaml --alsologtostderr:
I0923 04:18:06.981305   19629 out.go:345] Setting OutFile to fd 1 ...
I0923 04:18:06.981455   19629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:06.981458   19629 out.go:358] Setting ErrFile to fd 2...
I0923 04:18:06.981460   19629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:06.981608   19629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:18:06.982045   19629 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:06.982104   19629 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh pgrep buildkitd: exit status 83 (43.702833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image build -t localhost/my-image:functional-539000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-539000 image build -t localhost/my-image:functional-539000 testdata/build --alsologtostderr:
I0923 04:18:07.061935   19633 out.go:345] Setting OutFile to fd 1 ...
I0923 04:18:07.062321   19633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:07.062325   19633 out.go:358] Setting ErrFile to fd 2...
I0923 04:18:07.062327   19633 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:18:07.062511   19633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:18:07.062944   19633 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:07.063398   19633 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:18:07.063647   19633 build_images.go:133] succeeded building to: 
I0923 04:18:07.063651   19633 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls
functional_test.go:446: expected "localhost/my-image:functional-539000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-539000 docker-env) && out/minikube-darwin-arm64 status -p functional-539000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-539000 docker-env) && out/minikube-darwin-arm64 status -p functional-539000": exit status 1 (45.146708ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2: exit status 83 (42.75025ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:18:06.814866   19621 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:18:06.815846   19621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.815853   19621 out.go:358] Setting ErrFile to fd 2...
	I0923 04:18:06.815855   19621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.816060   19621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:18:06.816276   19621 mustload.go:65] Loading cluster: functional-539000
	I0923 04:18:06.816483   19621 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:18:06.820983   19621 out.go:177] * The control-plane node functional-539000 host is not running: state=Stopped
	I0923 04:18:06.824014   19621 out.go:177]   To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)
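Note: update-context rewrites the profile's kubeconfig entry and reports either "No changes" or that the context has been updated; the test accepts only those strings, so the stopped-host guidance fails the want pattern. A sketch of that acceptance check, assuming the binary and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-539000",
		"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
	if err != nil {
		fmt.Println(err) // exit status 83 here, as in the log above
		return
	}
	got := string(out)
	if !strings.Contains(got, "No changes") && !strings.Contains(got, "context has been updated") {
		fmt.Println("unexpected update-context output:", got)
	}
}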

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2: exit status 83 (42.384458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:18:06.902290   19625 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:18:06.902453   19625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.902456   19625 out.go:358] Setting ErrFile to fd 2...
	I0923 04:18:06.902459   19625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.902583   19625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:18:06.902813   19625 mustload.go:65] Loading cluster: functional-539000
	I0923 04:18:06.903041   19625 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:18:06.908053   19625 out.go:177] * The control-plane node functional-539000 host is not running: state=Stopped
	I0923 04:18:06.910876   19625 out.go:177]   To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2: exit status 83 (43.61525ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:18:06.858225   19623 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:18:06.858355   19623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.858359   19623 out.go:358] Setting ErrFile to fd 2...
	I0923 04:18:06.858361   19623 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.858489   19623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:18:06.858713   19623 mustload.go:65] Loading cluster: functional-539000
	I0923 04:18:06.858904   19623 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:18:06.863962   19623 out.go:177] * The control-plane node functional-539000 host is not running: state=Stopped
	I0923 04:18:06.867962   19623 out.go:177]   To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-539000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-539000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-539000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.846417ms)

                                                
                                                
** stderr ** 
	error: context "functional-539000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-539000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 service list: exit status 83 (40.733291ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-539000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 service list -o json: exit status 83 (46.867417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-539000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 service --namespace=default --https --url hello-node: exit status 83 (42.82425ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-539000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 service hello-node --url --format={{.IP}}: exit status 83 (41.566ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-539000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 service hello-node --url: exit status 83 (41.995458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-539000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test.go:1569: failed to parse "* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"": parse "* The control-plane node functional-539000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-539000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
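Note: the final error is net/url rejecting a control character; the "URL" handed to the parser is really the two-line guidance message, newline included. The parse failure in isolation:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The guidance text minikube printed instead of a service URL; the
	// embedded newline is the invalid control character net/url reports.
	s := "* The control-plane node functional-539000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-539000\""
	if _, err := url.Parse(s); err != nil {
		fmt.Println(err) // net/url: invalid control character in URL
	}
}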

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0923 04:17:32.658107   19392 out.go:345] Setting OutFile to fd 1 ...
I0923 04:17:32.658281   19392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:17:32.658284   19392 out.go:358] Setting ErrFile to fd 2...
I0923 04:17:32.658287   19392 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:17:32.658427   19392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:17:32.658663   19392 mustload.go:65] Loading cluster: functional-539000
I0923 04:17:32.658901   19392 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:17:32.663623   19392 out.go:177] * The control-plane node functional-539000 host is not running: state=Stopped
I0923 04:17:32.672700   19392 out.go:177]   To start a cluster, run: "minikube start -p functional-539000"

                                                
                                                
stdout: * The control-plane node functional-539000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-539000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 19391: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-539000": client config: context "functional-539000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0923 04:17:32.727871   18914 retry.go:31] will retry after 1.968706551s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-539000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-539000 get svc nginx-svc: exit status 1 (69.305125ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-539000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-539000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.62s)
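Note: the repeated Get "http:" errors mean the tunnel never produced a LoadBalancer IP, so the test built its request URL from an empty host and net/http refuses it before dialing anything. The failure mode in isolation:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// "http:" parses as a URL with a scheme and no host, which the HTTP
	// client rejects up front rather than attempting a connection.
	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}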

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image load --daemon kicbase/echo-server:functional-539000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-539000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image load --daemon kicbase/echo-server:functional-539000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-539000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-539000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image load --daemon kicbase/echo-server:functional-539000 --alsologtostderr
I0923 04:17:34.698779   18914 retry.go:31] will retry after 4.98996535s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-539000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image save kicbase/echo-server:functional-539000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
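Note: "image save" exited cleanly but wrote nothing, since there is no running node to export from, so the test's assertion, which reduces to a stat of the tarball path, fails. The check in isolation (path taken from the log above):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The test's final assertion is effectively this stat.
	if _, err := os.Stat("/Users/jenkins/workspace/echo-server-save.tar"); err != nil {
		fmt.Println(err) // no such file: the save was a silent no-op
	}
}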

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-539000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I0923 04:19:20.345494   18914 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036047292s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
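Note: the dig invocation queries the cluster DNS service (10.96.0.10) directly, and that address is reachable from the host only while "minikube tunnel" routes the service CIDR into the VM; with no tunnel the query can only time out, which is what "no servers could be reached" reports. A Go equivalent of the same direct query (a sketch, not the test's code):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every lookup to the cluster DNS service, as dig @10.96.0.10 does.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println(err) // times out here, matching dig's "no servers could be reached"
		return
	}
	fmt.Println(addrs)
}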

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I0923 04:19:45.480214   18914 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:19:55.482565   18914 retry.go:31] will retry after 2.620146804s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0923 04:20:08.107327   18914 retry.go:31] will retry after 2.723372754s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:50102->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-576000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-576000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.837429875s)

                                                
                                                
-- stdout --
	* [ha-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-576000" primary control-plane node in "ha-576000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-576000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:20:15.875480   19751 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:20:15.875596   19751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:20:15.875600   19751 out.go:358] Setting ErrFile to fd 2...
	I0923 04:20:15.875602   19751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:20:15.875741   19751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:20:15.876806   19751 out.go:352] Setting JSON to false
	I0923 04:20:15.893086   19751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8386,"bootTime":1727082029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:20:15.893194   19751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:20:15.898547   19751 out.go:177] * [ha-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:20:15.906930   19751 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:20:15.906972   19751 notify.go:220] Checking for updates...
	I0923 04:20:15.916068   19751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:20:15.919773   19751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:20:15.922724   19751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:20:15.925702   19751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:20:15.928681   19751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:20:15.931793   19751 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:20:15.935700   19751 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:20:15.941591   19751 start.go:297] selected driver: qemu2
	I0923 04:20:15.941597   19751 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:20:15.941605   19751 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:20:15.944117   19751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:20:15.946690   19751 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:20:15.949744   19751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:20:15.949761   19751 cni.go:84] Creating CNI manager for ""
	I0923 04:20:15.949785   19751 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 04:20:15.949790   19751 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 04:20:15.949824   19751 start.go:340] cluster config:
	{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:20:15.953651   19751 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:20:15.962622   19751 out.go:177] * Starting "ha-576000" primary control-plane node in "ha-576000" cluster
	I0923 04:20:15.965637   19751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:20:15.965655   19751 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:20:15.965662   19751 cache.go:56] Caching tarball of preloaded images
	I0923 04:20:15.965733   19751 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:20:15.965741   19751 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:20:15.965958   19751 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/ha-576000/config.json ...
	I0923 04:20:15.965973   19751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/ha-576000/config.json: {Name:mkbe3ff2b596ee25f75da5f31464e809aa34e793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:20:15.966197   19751 start.go:360] acquireMachinesLock for ha-576000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:20:15.966231   19751 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "ha-576000"
	I0923 04:20:15.966247   19751 start.go:93] Provisioning new machine with config: &{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:20:15.966286   19751 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:20:15.974586   19751 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:20:15.992221   19751 start.go:159] libmachine.API.Create for "ha-576000" (driver="qemu2")
	I0923 04:20:15.992250   19751 client.go:168] LocalClient.Create starting
	I0923 04:20:15.992307   19751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:20:15.992335   19751 main.go:141] libmachine: Decoding PEM data...
	I0923 04:20:15.992346   19751 main.go:141] libmachine: Parsing certificate...
	I0923 04:20:15.992391   19751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:20:15.992414   19751 main.go:141] libmachine: Decoding PEM data...
	I0923 04:20:15.992422   19751 main.go:141] libmachine: Parsing certificate...
	I0923 04:20:15.992856   19751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:20:16.159961   19751 main.go:141] libmachine: Creating SSH key...
	I0923 04:20:16.237792   19751 main.go:141] libmachine: Creating Disk image...
	I0923 04:20:16.237801   19751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:20:16.238026   19751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:20:16.247314   19751 main.go:141] libmachine: STDOUT: 
	I0923 04:20:16.247339   19751 main.go:141] libmachine: STDERR: 
	I0923 04:20:16.247407   19751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2 +20000M
	I0923 04:20:16.255237   19751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:20:16.255251   19751 main.go:141] libmachine: STDERR: 
	I0923 04:20:16.255276   19751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:20:16.255280   19751 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:20:16.255290   19751 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:20:16.255315   19751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:02:72:b7:d1:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:20:16.256973   19751 main.go:141] libmachine: STDOUT: 
	I0923 04:20:16.256986   19751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:20:16.257004   19751 client.go:171] duration metric: took 264.750708ms to LocalClient.Create
	I0923 04:20:18.259206   19751 start.go:128] duration metric: took 2.292898667s to createHost
	I0923 04:20:18.259453   19751 start.go:83] releasing machines lock for "ha-576000", held for 2.293080875s
	W0923 04:20:18.259524   19751 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:20:18.276505   19751 out.go:177] * Deleting "ha-576000" in qemu2 ...
	W0923 04:20:18.305551   19751 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:20:18.305576   19751 start.go:729] Will try again in 5 seconds ...
	I0923 04:20:23.307807   19751 start.go:360] acquireMachinesLock for ha-576000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:20:23.308275   19751 start.go:364] duration metric: took 362.917µs to acquireMachinesLock for "ha-576000"
	I0923 04:20:23.308434   19751 start.go:93] Provisioning new machine with config: &{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:20:23.308738   19751 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:20:23.329538   19751 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:20:23.380730   19751 start.go:159] libmachine.API.Create for "ha-576000" (driver="qemu2")
	I0923 04:20:23.380776   19751 client.go:168] LocalClient.Create starting
	I0923 04:20:23.380904   19751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:20:23.380967   19751 main.go:141] libmachine: Decoding PEM data...
	I0923 04:20:23.380983   19751 main.go:141] libmachine: Parsing certificate...
	I0923 04:20:23.381053   19751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:20:23.381097   19751 main.go:141] libmachine: Decoding PEM data...
	I0923 04:20:23.381110   19751 main.go:141] libmachine: Parsing certificate...
	I0923 04:20:23.381633   19751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:20:23.557617   19751 main.go:141] libmachine: Creating SSH key...
	I0923 04:20:23.613715   19751 main.go:141] libmachine: Creating Disk image...
	I0923 04:20:23.613721   19751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:20:23.613937   19751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:20:23.623149   19751 main.go:141] libmachine: STDOUT: 
	I0923 04:20:23.623168   19751 main.go:141] libmachine: STDERR: 
	I0923 04:20:23.623238   19751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2 +20000M
	I0923 04:20:23.631054   19751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:20:23.631067   19751 main.go:141] libmachine: STDERR: 
	I0923 04:20:23.631084   19751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:20:23.631088   19751 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:20:23.631094   19751 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:20:23.631125   19751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:58:af:99:d8:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:20:23.632785   19751 main.go:141] libmachine: STDOUT: 
	I0923 04:20:23.632801   19751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:20:23.632814   19751 client.go:171] duration metric: took 252.032625ms to LocalClient.Create
	I0923 04:20:25.634974   19751 start.go:128] duration metric: took 2.326203917s to createHost
	I0923 04:20:25.635089   19751 start.go:83] releasing machines lock for "ha-576000", held for 2.326756667s
	W0923 04:20:25.635387   19751 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-576000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-576000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:20:25.651082   19751 out.go:201] 
	W0923 04:20:25.656232   19751 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:20:25.656259   19751 out.go:270] * 
	* 
	W0923 04:20:25.658635   19751 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:20:25.669067   19751 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-576000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (68.019375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.91s)
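Note on the failure above: everything in this start sequence reduces to the single line `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not accepting connections, so socket_vmnet_client could not hand QEMU a network file descriptor and no VM was ever created. As a minimal sketch (standalone, not part of the test suite; the file name is invented), the same condition can be probed from Go by dialing the unix socket directly:

```go
// probe_socket_vmnet.go - a minimal sketch, not part of the minikube test
// suite: dial the socket_vmnet unix socket the same way socket_vmnet_client
// must before it can pass qemu a connected file descriptor.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With the daemon stopped this yields "connect: connection refused"
		// (stale socket file) or "no such file or directory" (no socket).
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}
```

On a host in this state the probe fails immediately; once the daemon is restarted (it typically runs as root, since Apple's vmnet framework needs elevated privileges), the dial succeeds and the qemu2 driver can proceed.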

                                                
                                    
TestMultiControlPlane/serial/DeployApp (109.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.636375ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-576000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- rollout status deployment/busybox: exit status 1 (58.308125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.411166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:25.931823   18914 retry.go:31] will retry after 1.342628944s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.788375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:27.382676   18914 retry.go:31] will retry after 1.853960916s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.258125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:29.343292   18914 retry.go:31] will retry after 1.497948642s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.483917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:30.949052   18914 retry.go:31] will retry after 3.595656675s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.494292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:34.651545   18914 retry.go:31] will retry after 5.069602784s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.401625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:39.827950   18914 retry.go:31] will retry after 10.156817845s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.795584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:50.093952   18914 retry.go:31] will retry after 7.271063079s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.270833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:20:57.472743   18914 retry.go:31] will retry after 9.620070376s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.991666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:21:07.200232   18914 retry.go:31] will retry after 16.202701146s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.64325ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:21:23.508559   18914 retry.go:31] will retry after 51.701089077s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.918209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.891916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.956625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.984416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.779833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.618708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (109.83s)
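The `retry.go:31] will retry after ...` lines above come from minikube's retry helper, which re-runs the kubectl probe with a growing, jittered delay until its time budget is spent. The sketch below illustrates that pattern; the function name and parameters are assumptions for illustration, not minikube's actual pkg/util/retry API:

```go
// backoff_sketch.go - a rough sketch of retry with jittered exponential
// backoff, illustrating the "will retry after ..." behaviour logged above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter retries fn with an exponentially growing, jittered delay,
// giving up once the total elapsed time exceeds budget.
func retryAfter(fn func() error, base, budget time.Duration) error {
	start := time.Now()
	delay := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > budget {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Second), err)
		}
		// Jitter: sleep between 0.5x and 1.5x of the nominal delay, so the
		// intervals wander (1.3s, 1.8s, 1.5s, 3.6s, ... in the log above)
		// rather than doubling exactly.
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	_ = retryAfter(func() error {
		return errors.New(`no server found for cluster "ha-576000"`)
	}, time.Second, 10*time.Second)
}
```

Here the retries are pointless by construction: the cluster was never created, so every probe fails the same way until the budget runs out, which is why this test burns 109.83s on a dead profile.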

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-576000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.995917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-576000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.585667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-576000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-576000 -v=7 --alsologtostderr: exit status 83 (41.846583ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-576000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-576000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:22:15.697408   19888 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:15.697941   19888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:15.697945   19888 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:15.697947   19888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:15.698116   19888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:15.698347   19888 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:15.698562   19888 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:15.702454   19888 out.go:177] * The control-plane node ha-576000 host is not running: state=Stopped
	I0923 04:22:15.706282   19888 out.go:177]   To start a cluster, run: "minikube start -p ha-576000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-576000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.439709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-576000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-576000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.172625ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-576000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-576000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-576000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (31.351875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
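The `unexpected end of JSON input` at ha_test.go:264 is the standard encoding/json error for zero-length input: kubectl wrote its configuration error to stderr and nothing to stdout, so the test decoded an empty string. A standalone demonstration (not test code):

```go
// empty_input_sketch.go - standalone demonstration, not test code: decoding
// an empty kubectl stdout with encoding/json reproduces the error above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```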

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-576000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-576000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.957042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
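ha_test.go:304 and ha_test.go:307 decode the output of `profile list --output json` and assert on the node count and the profile status. A trimmed-down sketch of that check follows; the struct mirrors only the fields used here and is inferred from the JSON dump above, not taken from minikube's real config types:

```go
// profile_nodes_sketch.go - a trimmed-down sketch of the ha_test.go check:
// decode `minikube profile list --output json` and count the nodes of the
// "ha-576000" profile. Struct shape guessed from the JSON dump above.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	raw := []byte(`{"valid":[{"Name":"ha-576000","Status":"Starting",
		"Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants 4 nodes and status "HAppy"; the failed start left
		// the profile with its initial single node and "Starting" status.
		fmt.Printf("%s: %d node(s), status %q\n", p.Name, len(p.Config.Nodes), p.Status)
	}
}
```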

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status --output json -v=7 --alsologtostderr: exit status 7 (30.830292ms)

                                                
                                                
-- stdout --
	{"Name":"ha-576000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:22:15.907425   19900 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:15.907583   19900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:15.907586   19900 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:15.907589   19900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:15.907728   19900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:15.907851   19900 out.go:352] Setting JSON to true
	I0923 04:22:15.907869   19900 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:15.907923   19900 notify.go:220] Checking for updates...
	I0923 04:22:15.908074   19900 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:15.908083   19900 status.go:174] checking status of ha-576000 ...
	I0923 04:22:15.908323   19900 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:15.908327   19900 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:15.908330   19900 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-576000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.729083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
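The CopyFile failure is a shape mismatch rather than another connectivity error: with only one node provisioned, `status --output json` printed the bare object shown in the stdout above, while the test unmarshals into a slice of cluster.Status. The sketch below reproduces the decode error with a stand-in Status type (an assumption, not the real cluster.Status) and shows one tolerant way to accept both shapes:

```go
// status_shape_sketch.go - reproduces the decode failure above with a
// stand-in Status type: a bare JSON object cannot unmarshal into a slice.
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host string
}

func main() {
	out := []byte(`{"Name":"ha-576000","Host":"Stopped"}`)

	var many []Status
	// Prints: json: cannot unmarshal object into Go value of type []main.Status
	fmt.Println(json.Unmarshal(out, &many))

	// One tolerant approach: sniff the first byte and wrap a bare object
	// in brackets so single-node and multi-node output decode the same way.
	if len(out) > 0 && out[0] == '{' {
		out = append(append([]byte{'['}, out...), ']')
	}
	fmt.Println(json.Unmarshal(out, &many), many)
}
```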

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.338208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:22:15.969962   19904 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:15.970542   19904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:15.970545   19904 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:15.970548   19904 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:15.970705   19904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:15.970971   19904 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:15.971172   19904 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:15.975501   19904 out.go:201] 
	W0923 04:22:15.979575   19904 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0923 04:22:15.979585   19904 out.go:270] * 
	* 
	W0923 04:22:15.981751   19904 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:22:15.985565   19904 out.go:201] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-576000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (31.139459ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:16.019949   19906 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:16.020112   19906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:16.020115   19906 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:16.020117   19906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:16.020250   19906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:16.020378   19906 out.go:352] Setting JSON to false
	I0923 04:22:16.020388   19906 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:16.020438   19906 notify.go:220] Checking for updates...
	I0923 04:22:16.020614   19906 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:16.020621   19906 status.go:174] checking status of ha-576000 ...
	I0923 04:22:16.020861   19906 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:16.020864   19906 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:16.020866   19906 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.578458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
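
Note: the exit-85 failure above is a downstream effect of the earlier StartCluster failure: the profile's Nodes list only ever contains the primary entry, so the m02 lookup fails with GUEST_NODE_RETRIEVE before any stop is attempted. A minimal Go sketch of that lookup shape (names here are illustrative, not minikube's actual mustload/node code):

package main

import (
	"errors"
	"fmt"
)

// Node and Cluster loosely mirror the profile config seen in the log,
// where Nodes holds only the (unnamed) primary control-plane entry.
type Node struct {
	Name         string
	ControlPlane bool
}

type Cluster struct {
	Name  string
	Nodes []Node
}

var errNodeNotFound = errors.New("retrieving node: Could not find node")

// findNode is a hypothetical stand-in for the lookup that fails with
// GUEST_NODE_RETRIEVE above: if the requested name is absent from the
// profile, there is nothing to stop or start.
func findNode(c Cluster, name string) (Node, error) {
	for _, n := range c.Nodes {
		if n.Name == name {
			return n, nil
		}
	}
	return Node{}, fmt.Errorf("%w %s", errNodeNotFound, name)
}

func main() {
	c := Cluster{Name: "ha-576000", Nodes: []Node{{Name: "", ControlPlane: true}}}
	if _, err := findNode(c, "m02"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err) // same error shape as the log
	}
}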

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-576000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.437333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
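
Note: the Degraded/HAppy assertions decode `profile list --output json` into a valid/invalid envelope and then inspect Status and the Nodes slice. A trimmed Go sketch of that decoding, keeping only the fields the assertion reads (struct layout inferred from the JSON quoted above):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just the fields the test assertions read from
// `minikube profile list --output json`; everything else is ignored.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed-down version of the JSON quoted in the failure message above.
	raw := `{"invalid":[],"valid":[{"Name":"ha-576000","Status":"Starting",
	         "Config":{"Nodes":[{"ControlPlane":true}]}}]}`

	var pl profileList
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		panic(err)
	}
	p := pl.Valid[0]
	// The test expected Status "Degraded" and multiple nodes; with the
	// cluster never having started, it sees "Starting" and a single node.
	fmt.Printf("profile %s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
}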

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.284833ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0923 04:22:16.161117   19915 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:16.161945   19915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:16.161949   19915 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:16.161951   19915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:16.162123   19915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:16.162343   19915 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:16.162554   19915 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:16.166574   19915 out.go:201] 
	W0923 04:22:16.170548   19915 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0923 04:22:16.170554   19915 out.go:270] * 
	* 
	W0923 04:22:16.172819   19915 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:22:16.177553   19915 out.go:201] 

** /stderr **
ha_test.go:422: I0923 04:22:16.161117   19915 out.go:345] Setting OutFile to fd 1 ...
I0923 04:22:16.161945   19915 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:22:16.161949   19915 out.go:358] Setting ErrFile to fd 2...
I0923 04:22:16.161951   19915 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:22:16.162123   19915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:22:16.162343   19915 mustload.go:65] Loading cluster: ha-576000
I0923 04:22:16.162554   19915 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:22:16.166574   19915 out.go:201] 
W0923 04:22:16.170548   19915 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0923 04:22:16.170554   19915 out.go:270] * 
* 
W0923 04:22:16.172819   19915 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 04:22:16.177553   19915 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-576000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (30.558375ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:16.211352   19917 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:16.211486   19917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:16.211490   19917 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:16.211492   19917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:16.211646   19917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:16.211767   19917 out.go:352] Setting JSON to false
	I0923 04:22:16.211780   19917 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:16.211845   19917 notify.go:220] Checking for updates...
	I0923 04:22:16.212021   19917 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:16.212032   19917 status.go:174] checking status of ha-576000 ...
	I0923 04:22:16.212282   19917 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:16.212286   19917 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:16.212288   19917 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:16.213144   18914 retry.go:31] will retry after 914.878153ms: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (73.621875ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:17.201683   19921 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:17.201853   19921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:17.201857   19921 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:17.201860   19921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:17.202050   19921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:17.202208   19921 out.go:352] Setting JSON to false
	I0923 04:22:17.202221   19921 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:17.202268   19921 notify.go:220] Checking for updates...
	I0923 04:22:17.202523   19921 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:17.202533   19921 status.go:174] checking status of ha-576000 ...
	I0923 04:22:17.202843   19921 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:17.202848   19921 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:17.202851   19921 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:17.203976   18914 retry.go:31] will retry after 1.359675024s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (74.537208ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:18.638310   19923 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:18.638532   19923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:18.638537   19923 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:18.638541   19923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:18.638706   19923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:18.638877   19923 out.go:352] Setting JSON to false
	I0923 04:22:18.638891   19923 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:18.638933   19923 notify.go:220] Checking for updates...
	I0923 04:22:18.639177   19923 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:18.639189   19923 status.go:174] checking status of ha-576000 ...
	I0923 04:22:18.639501   19923 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:18.639506   19923 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:18.639509   19923 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:18.640564   18914 retry.go:31] will retry after 2.558949146s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (73.919ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:21.273659   19925 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:21.273867   19925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:21.273871   19925 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:21.273874   19925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:21.274049   19925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:21.274198   19925 out.go:352] Setting JSON to false
	I0923 04:22:21.274212   19925 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:21.274260   19925 notify.go:220] Checking for updates...
	I0923 04:22:21.274505   19925 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:21.274515   19925 status.go:174] checking status of ha-576000 ...
	I0923 04:22:21.274827   19925 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:21.274832   19925 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:21.274835   19925 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:21.275842   18914 retry.go:31] will retry after 3.590518088s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (75.274875ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:24.941794   19929 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:24.941996   19929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:24.942001   19929 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:24.942004   19929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:24.942196   19929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:24.942359   19929 out.go:352] Setting JSON to false
	I0923 04:22:24.942376   19929 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:24.942426   19929 notify.go:220] Checking for updates...
	I0923 04:22:24.942683   19929 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:24.942703   19929 status.go:174] checking status of ha-576000 ...
	I0923 04:22:24.943009   19929 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:24.943014   19929 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:24.943016   19929 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:24.944075   18914 retry.go:31] will retry after 2.545780991s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (74.559292ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:27.564490   19931 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:27.564644   19931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:27.564649   19931 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:27.564652   19931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:27.564846   19931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:27.565001   19931 out.go:352] Setting JSON to false
	I0923 04:22:27.565015   19931 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:27.565060   19931 notify.go:220] Checking for updates...
	I0923 04:22:27.565289   19931 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:27.565300   19931 status.go:174] checking status of ha-576000 ...
	I0923 04:22:27.565621   19931 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:27.565626   19931 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:27.565628   19931 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:27.566832   18914 retry.go:31] will retry after 6.489191999s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (75.111625ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:34.131268   19937 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:34.131492   19937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:34.131497   19937 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:34.131501   19937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:34.131689   19937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:34.131851   19937 out.go:352] Setting JSON to false
	I0923 04:22:34.131865   19937 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:34.131908   19937 notify.go:220] Checking for updates...
	I0923 04:22:34.132135   19937 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:34.132149   19937 status.go:174] checking status of ha-576000 ...
	I0923 04:22:34.132461   19937 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:34.132466   19937 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:34.132469   19937 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:34.133618   18914 retry.go:31] will retry after 6.439896218s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (74.644417ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:40.648394   19943 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:40.648595   19943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:40.648600   19943 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:40.648603   19943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:40.648784   19943 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:40.648933   19943 out.go:352] Setting JSON to false
	I0923 04:22:40.648947   19943 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:40.648999   19943 notify.go:220] Checking for updates...
	I0923 04:22:40.649213   19943 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:40.649226   19943 status.go:174] checking status of ha-576000 ...
	I0923 04:22:40.649531   19943 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:40.649536   19943 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:40.649539   19943 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:40.650669   18914 retry.go:31] will retry after 13.935065465s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (74.75925ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:22:54.660616   19949 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:22:54.660853   19949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:54.660857   19949 out.go:358] Setting ErrFile to fd 2...
	I0923 04:22:54.660860   19949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:22:54.661042   19949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:22:54.661194   19949 out.go:352] Setting JSON to false
	I0923 04:22:54.661208   19949 mustload.go:65] Loading cluster: ha-576000
	I0923 04:22:54.661248   19949 notify.go:220] Checking for updates...
	I0923 04:22:54.661503   19949 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:22:54.661516   19949 status.go:174] checking status of ha-576000 ...
	I0923 04:22:54.661848   19949 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:22:54.661853   19949 status.go:377] host is not running, skipping remaining checks
	I0923 04:22:54.661856   19949 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:22:54.663037   18914 retry.go:31] will retry after 21.486025129s: exit status 7
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (74.144ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:23:16.223328   19961 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:16.223549   19961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:16.223554   19961 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:16.223557   19961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:16.223732   19961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:16.223891   19961 out.go:352] Setting JSON to false
	I0923 04:23:16.223904   19961 mustload.go:65] Loading cluster: ha-576000
	I0923 04:23:16.223945   19961 notify.go:220] Checking for updates...
	I0923 04:23:16.224180   19961 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:16.224191   19961 status.go:174] checking status of ha-576000 ...
	I0923 04:23:16.224511   19961 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:23:16.224516   19961 status.go:377] host is not running, skipping remaining checks
	I0923 04:23:16.224518   19961 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (33.71775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.13s)
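
Note: the "will retry after …" lines come from a retry helper that re-runs the status check with growing, jittered delays until its budget expires, which is why this test takes 60s to fail even though each individual status call returns in under 100ms. A minimal Go sketch of that pattern (hypothetical helper, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn until it succeeds or the deadline passes, sleeping an
// exponentially growing, jittered interval between attempts -- the shape
// of the "will retry after 914.878153ms" lines in the log.
func retry(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	err := retry(3*time.Second, func() error {
		return errors.New("exit status 7") // stands in for the failing status check
	})
	fmt.Println(err)
}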

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-576000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-576000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.355042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
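
Note: the post-mortem's `status --format={{.Host}}` renders the same status struct visible in the stderr traces (`&{Name:ha-576000 Host:Stopped ...}`) through a Go text/template, which is why the command prints just "Stopped". A small stand-alone illustration (the struct here is a stand-in, not minikube's):

package main

import (
	"os"
	"text/template"
)

// status carries the fields visible in the log's status dump.
type status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	s := status{Name: "ha-576000", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	// Equivalent of `minikube status --format={{.Host}}`: render one field.
	tmpl := template.Must(template.New("host").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, s) // prints "Stopped"
}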

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-576000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-576000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-576000 -v=7 --alsologtostderr: (3.19255725s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-576000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-576000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.221929166s)

-- stdout --
	* [ha-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-576000" primary control-plane node in "ha-576000" cluster
	* Restarting existing qemu2 VM for "ha-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:23:19.625283   19990 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:19.625440   19990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:19.625445   19990 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:19.625448   19990 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:19.625613   19990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:19.626852   19990 out.go:352] Setting JSON to false
	I0923 04:23:19.646178   19990 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8570,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:23:19.646245   19990 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:23:19.650955   19990 out.go:177] * [ha-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:23:19.658874   19990 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:23:19.658946   19990 notify.go:220] Checking for updates...
	I0923 04:23:19.665799   19990 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:23:19.668827   19990 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:23:19.671882   19990 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:23:19.674780   19990 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:23:19.677833   19990 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:23:19.681162   19990 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:19.681223   19990 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:23:19.685689   19990 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:23:19.692844   19990 start.go:297] selected driver: qemu2
	I0923 04:23:19.692849   19990 start.go:901] validating driver "qemu2" against &{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:23:19.692894   19990 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:23:19.695319   19990 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:23:19.695344   19990 cni.go:84] Creating CNI manager for ""
	I0923 04:23:19.695367   19990 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 04:23:19.695413   19990 start.go:340] cluster config:
	{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:23:19.699206   19990 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:23:19.706784   19990 out.go:177] * Starting "ha-576000" primary control-plane node in "ha-576000" cluster
	I0923 04:23:19.709674   19990 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:23:19.709689   19990 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:23:19.709697   19990 cache.go:56] Caching tarball of preloaded images
	I0923 04:23:19.709764   19990 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:23:19.709770   19990 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:23:19.709824   19990 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/ha-576000/config.json ...
	I0923 04:23:19.710291   19990 start.go:360] acquireMachinesLock for ha-576000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:23:19.710329   19990 start.go:364] duration metric: took 32.041µs to acquireMachinesLock for "ha-576000"
	I0923 04:23:19.710340   19990 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:23:19.710344   19990 fix.go:54] fixHost starting: 
	I0923 04:23:19.710482   19990 fix.go:112] recreateIfNeeded on ha-576000: state=Stopped err=<nil>
	W0923 04:23:19.710491   19990 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:23:19.718843   19990 out.go:177] * Restarting existing qemu2 VM for "ha-576000" ...
	I0923 04:23:19.722728   19990 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:23:19.722759   19990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:58:af:99:d8:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:23:19.724874   19990 main.go:141] libmachine: STDOUT: 
	I0923 04:23:19.724898   19990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:23:19.724930   19990 fix.go:56] duration metric: took 14.583125ms for fixHost
	I0923 04:23:19.724934   19990 start.go:83] releasing machines lock for "ha-576000", held for 14.600042ms
	W0923 04:23:19.724943   19990 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:23:19.724975   19990 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:23:19.724980   19990 start.go:729] Will try again in 5 seconds ...
	I0923 04:23:24.727145   19990 start.go:360] acquireMachinesLock for ha-576000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:23:24.727568   19990 start.go:364] duration metric: took 300.333µs to acquireMachinesLock for "ha-576000"
	I0923 04:23:24.727711   19990 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:23:24.727732   19990 fix.go:54] fixHost starting: 
	I0923 04:23:24.728459   19990 fix.go:112] recreateIfNeeded on ha-576000: state=Stopped err=<nil>
	W0923 04:23:24.728485   19990 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:23:24.733339   19990 out.go:177] * Restarting existing qemu2 VM for "ha-576000" ...
	I0923 04:23:24.739980   19990 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:23:24.740155   19990 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:58:af:99:d8:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:23:24.750020   19990 main.go:141] libmachine: STDOUT: 
	I0923 04:23:24.750123   19990 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:23:24.750223   19990 fix.go:56] duration metric: took 22.492375ms for fixHost
	I0923 04:23:24.750242   19990 start.go:83] releasing machines lock for "ha-576000", held for 22.652041ms
	W0923 04:23:24.750459   19990 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:23:24.757946   19990 out.go:201] 
	W0923 04:23:24.762079   19990 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:23:24.762113   19990 out.go:270] * 
	* 
	W0923 04:23:24.764431   19990 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:23:24.771045   19990 out.go:201] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-576000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-576000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (33.661ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.55s)
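
Every restart attempt in this block dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet and qemu never gets its network, so provisioning aborts with GUEST_PROVISION. As a minimal standalone sketch (not part of ha_test.go or the minikube codebase), the failing dial can be reproduced outside the harness with a few lines of Go; the socket path is taken verbatim from the logs above, and "connection refused" here means nothing is listening on that socket on the CI host:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Same unix socket that /opt/socket_vmnet/bin/socket_vmnet_client dials in the logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}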

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.78675ms)

-- stdout --
	* The control-plane node ha-576000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-576000"

-- /stdout --
** stderr ** 
	I0923 04:23:24.918943   20013 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:24.919367   20013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:24.919371   20013 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:24.919374   20013 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:24.919580   20013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:24.919785   20013 mustload.go:65] Loading cluster: ha-576000
	I0923 04:23:24.919982   20013 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:24.923052   20013 out.go:177] * The control-plane node ha-576000 host is not running: state=Stopped
	I0923 04:23:24.927062   20013 out.go:177]   To start a cluster, run: "minikube start -p ha-576000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-576000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (30.791ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:23:24.961002   20015 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:24.961159   20015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:24.961163   20015 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:24.961165   20015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:24.961297   20015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:24.961427   20015 out.go:352] Setting JSON to false
	I0923 04:23:24.961437   20015 mustload.go:65] Loading cluster: ha-576000
	I0923 04:23:24.961508   20015 notify.go:220] Checking for updates...
	I0923 04:23:24.961654   20015 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:24.961662   20015 status.go:174] checking status of ha-576000 ...
	I0923 04:23:24.961896   20015 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:23:24.961899   20015 status.go:377] host is not running, skipping remaining checks
	I0923 04:23:24.961901   20015 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (29.77825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-576000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.544625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
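
Both "Degraded" assertions in this run (here and after the cluster restart below) fail the same way: the profile is reported as "Starting" rather than "Degraded". As a hedged standalone sketch (not harness code), the Status field the assertion inspects can be read back by decoding the same `profile list --output json` payload quoted in the failure message; the JSON keys ("valid", "Name", "Status") are taken verbatim from that output, while the bare `minikube` binary name is a stand-in for the `out/minikube-darwin-arm64` build the harness invokes:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList mirrors only the fields of `minikube profile list --output json`
// that the assertion inspects; key names match the payload in the log above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Stand-in binary name; the harness runs out/minikube-darwin-arm64 instead.
	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // e.g. "ha-576000: Starting"
	}
}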

TestMultiControlPlane/serial/StopCluster (3.62s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-576000 stop -v=7 --alsologtostderr: (3.52461025s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr: exit status 7 (65.357417ms)

-- stdout --
	ha-576000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:23:28.658944   20042 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:28.659144   20042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:28.659148   20042 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:28.659152   20042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:28.659306   20042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:28.659460   20042 out.go:352] Setting JSON to false
	I0923 04:23:28.659473   20042 mustload.go:65] Loading cluster: ha-576000
	I0923 04:23:28.659501   20042 notify.go:220] Checking for updates...
	I0923 04:23:28.659771   20042 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:28.659781   20042 status.go:174] checking status of ha-576000 ...
	I0923 04:23:28.660087   20042 status.go:364] ha-576000 host status = "Stopped" (err=<nil>)
	I0923 04:23:28.660092   20042 status.go:377] host is not running, skipping remaining checks
	I0923 04:23:28.660095   20042 status.go:176] ha-576000 status: &{Name:ha-576000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-576000 status -v=7 --alsologtostderr": ha-576000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (32.876375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.62s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-576000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-576000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.178960083s)

-- stdout --
	* [ha-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-576000" primary control-plane node in "ha-576000" cluster
	* Restarting existing qemu2 VM for "ha-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-576000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:23:28.722428   20046 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:28.722564   20046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:28.722568   20046 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:28.722570   20046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:28.722706   20046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:28.723705   20046 out.go:352] Setting JSON to false
	I0923 04:23:28.739618   20046 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8579,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:23:28.739690   20046 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:23:28.744210   20046 out.go:177] * [ha-576000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:23:28.752034   20046 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:23:28.752100   20046 notify.go:220] Checking for updates...
	I0923 04:23:28.756978   20046 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:23:28.760028   20046 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:23:28.761370   20046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:23:28.763981   20046 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:23:28.767014   20046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:23:28.770376   20046 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:28.770643   20046 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:23:28.773967   20046 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:23:28.781020   20046 start.go:297] selected driver: qemu2
	I0923 04:23:28.781026   20046 start.go:901] validating driver "qemu2" against &{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:23:28.781073   20046 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:23:28.783230   20046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:23:28.783255   20046 cni.go:84] Creating CNI manager for ""
	I0923 04:23:28.783279   20046 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 04:23:28.783317   20046 start.go:340] cluster config:
	{Name:ha-576000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-576000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:23:28.786742   20046 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:23:28.795014   20046 out.go:177] * Starting "ha-576000" primary control-plane node in "ha-576000" cluster
	I0923 04:23:28.798952   20046 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:23:28.798968   20046 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:23:28.798976   20046 cache.go:56] Caching tarball of preloaded images
	I0923 04:23:28.799032   20046 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:23:28.799038   20046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:23:28.799098   20046 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/ha-576000/config.json ...
	I0923 04:23:28.799533   20046 start.go:360] acquireMachinesLock for ha-576000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:23:28.799562   20046 start.go:364] duration metric: took 22.542µs to acquireMachinesLock for "ha-576000"
	I0923 04:23:28.799572   20046 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:23:28.799576   20046 fix.go:54] fixHost starting: 
	I0923 04:23:28.799700   20046 fix.go:112] recreateIfNeeded on ha-576000: state=Stopped err=<nil>
	W0923 04:23:28.799709   20046 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:23:28.805920   20046 out.go:177] * Restarting existing qemu2 VM for "ha-576000" ...
	I0923 04:23:28.809994   20046 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:23:28.810033   20046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:58:af:99:d8:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:23:28.811974   20046 main.go:141] libmachine: STDOUT: 
	I0923 04:23:28.811993   20046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:23:28.812032   20046 fix.go:56] duration metric: took 12.454375ms for fixHost
	I0923 04:23:28.812037   20046 start.go:83] releasing machines lock for "ha-576000", held for 12.471ms
	W0923 04:23:28.812044   20046 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:23:28.812075   20046 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:23:28.812079   20046 start.go:729] Will try again in 5 seconds ...
	I0923 04:23:33.814189   20046 start.go:360] acquireMachinesLock for ha-576000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:23:33.814566   20046 start.go:364] duration metric: took 313.833µs to acquireMachinesLock for "ha-576000"
	I0923 04:23:33.814687   20046 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:23:33.814730   20046 fix.go:54] fixHost starting: 
	I0923 04:23:33.815439   20046 fix.go:112] recreateIfNeeded on ha-576000: state=Stopped err=<nil>
	W0923 04:23:33.815466   20046 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:23:33.827328   20046 out.go:177] * Restarting existing qemu2 VM for "ha-576000" ...
	I0923 04:23:33.831812   20046 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:23:33.831955   20046 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:58:af:99:d8:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/ha-576000/disk.qcow2
	I0923 04:23:33.839816   20046 main.go:141] libmachine: STDOUT: 
	I0923 04:23:33.839872   20046 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:23:33.839952   20046 fix.go:56] duration metric: took 25.208625ms for fixHost
	I0923 04:23:33.839969   20046 start.go:83] releasing machines lock for "ha-576000", held for 25.385208ms
	W0923 04:23:33.840140   20046 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-576000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:23:33.845713   20046 out.go:201] 
	W0923 04:23:33.849848   20046 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:23:33.849873   20046 out.go:270] * 
	* 
	W0923 04:23:33.852602   20046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:23:33.860945   20046 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-576000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (70.450083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-576000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.874084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-576000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-576000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.714208ms)

-- stdout --
	* The control-plane node ha-576000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-576000"

-- /stdout --
** stderr ** 
	I0923 04:23:34.057412   20061 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:23:34.057564   20061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:34.057568   20061 out.go:358] Setting ErrFile to fd 2...
	I0923 04:23:34.057570   20061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:23:34.057704   20061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:23:34.057992   20061 mustload.go:65] Loading cluster: ha-576000
	I0923 04:23:34.058192   20061 config.go:182] Loaded profile config "ha-576000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:23:34.062144   20061 out.go:177] * The control-plane node ha-576000 host is not running: state=Stopped
	I0923 04:23:34.065095   20061 out.go:177]   To start a cluster, run: "minikube start -p ha-576000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-576000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.408042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-576000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-576000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-576000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-576000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-576000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-576000 -n ha-576000: exit status 7 (30.852333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-576000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

TestImageBuild/serial/Setup (9.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-029000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-029000 --driver=qemu2 : exit status 80 (9.866995125s)

-- stdout --
	* [image-029000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-029000" primary control-plane node in "image-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-029000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-029000 -n image-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-029000 -n image-029000: exit status 7 (69.003459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-029000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.94s)
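
All of the qemu2 start failures in this report share a single root cause: socket_vmnet_client cannot reach the unix socket at "/var/run/socket_vmnet" (every attempt dies with "Connection refused"), so each VM creation aborts before qemu ever launches. As a minimal preflight sketch in Go, assuming only that the socket path matches the SocketVMnetPath recorded in the profile config above, one can dial the socket the same way the client does:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Assumed path; matches SocketVMnetPath in the profile config above.
        const sock = "/var/run/socket_vmnet"
        // A "connection refused" here reproduces the error that aborts
        // every VM creation in this run.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On this host the check would fail, which points at the socket_vmnet daemon on the Jenkins agent rather than at minikube or qemu.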

TestJSONOutput/start/Command (9.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-733000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-733000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.910444792s)

-- stdout --
	{"specversion":"1.0","id":"7cc216ea-2b46-4bc6-8cbc-1bec0f8a2d57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-733000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"63b46227-9101-4790-8829-e48a694e3d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"3a5d0809-b9cd-47a5-be69-09a6c1224836","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig"}}
	{"specversion":"1.0","id":"839b93c6-fbc9-4d68-b77e-599c6da5f828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"445da659-3b48-4703-8810-3ac3c7744fab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f66cc42d-27d3-4b06-ab2a-4d0a9875a9dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube"}}
	{"specversion":"1.0","id":"797de59f-6572-4fb2-846a-a7b2a14f54e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"12da7d75-b93a-4bd0-8148-96dbf7e0827d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f38ea1e8-da61-4a1f-8c38-a58f075bb84c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"47c17ed4-7ca0-49fb-bdd6-a8ff331872a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-733000\" primary control-plane node in \"json-output-733000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4c10838-791a-4f8c-8637-9546cb32f4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3f742826-8ad2-4735-9c5f-0bfcf3325abf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-733000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a143ce6-2f24-47cb-8c27-853138643631","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"eeafdb2c-8b80-4228-a396-184ee7973f32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c7282a80-aabf-49c1-a743-bb696e6e408c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-733000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"e032d278-08f3-4de4-b47e-069495b27406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"24c2b232-7d95-43e8-859a-8cbf9895e13f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-733000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.91s)
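
Besides the provisioning failure, this test also surfaces a decoding failure: json_output_test.go treats every stdout line as a JSON cloud event, and the bare "OUTPUT: " line injected by the qemu driver is not JSON, so decoding stops with "invalid character 'O'". The unpause test below fails the same way on a "*"-prefixed human-readable line ("invalid character '*'"). A small, hypothetical reproduction of both errors:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Neither line is valid JSON, so json.Unmarshal fails on the first
        // byte, matching the errors reported at json_output_test.go:70.
        for _, line := range []string{
            "OUTPUT: ",
            "* The control-plane node json-output-733000 host is not running: state=Stopped",
        } {
            var ev map[string]any
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                fmt.Println(err)
            }
        }
    }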

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-733000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-733000 --output=json --user=testUser: exit status 83 (79.822292ms)

-- stdout --
	{"specversion":"1.0","id":"558f6906-91d7-448e-b096-d13af20292d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-733000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"6402e660-bf2b-4ec0-b3af-921fb9a4b8c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-733000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-733000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-733000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-733000 --output=json --user=testUser: exit status 83 (46.197417ms)

-- stdout --
	* The control-plane node json-output-733000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-733000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-733000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-733000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-503000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-503000 --driver=qemu2 : exit status 80 (9.837752708s)

-- stdout --
	* [first-503000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-503000" primary control-plane node in "first-503000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-503000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-503000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-503000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-23 04:24:07.643828 -0700 PDT m=+475.649200043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-505000 -n second-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-505000 -n second-505000: exit status 85 (81.801542ms)

-- stdout --
	* Profile "second-505000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-505000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-505000" host is not running, skipping log retrieval (state="* Profile \"second-505000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-505000\"")
helpers_test.go:175: Cleaning up "second-505000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-505000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-23 04:24:07.837715 -0700 PDT m=+475.843087460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-503000 -n first-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-503000 -n first-503000: exit status 7 (31.087875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-503000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-503000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-503000
--- FAIL: TestMinikubeProfile (10.14s)

TestMountStart/serial/StartWithMountFirst (10.07s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-864000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-864000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.997337708s)

-- stdout --
	* [mount-start-1-864000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-864000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-864000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-864000 -n mount-start-1-864000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-864000 -n mount-start-1-864000: exit status 7 (70.13ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-864000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.07s)

TestMultiNode/serial/FreshStart2Nodes (10.08s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-090000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-090000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.013127458s)

-- stdout --
	* [multinode-090000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-090000" primary control-plane node in "multinode-090000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-090000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:24:18.235347   20225 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:24:18.235490   20225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:24:18.235493   20225 out.go:358] Setting ErrFile to fd 2...
	I0923 04:24:18.235496   20225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:24:18.235625   20225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:24:18.236685   20225 out.go:352] Setting JSON to false
	I0923 04:24:18.253011   20225 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8629,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:24:18.253072   20225 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:24:18.259247   20225 out.go:177] * [multinode-090000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:24:18.267186   20225 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:24:18.267230   20225 notify.go:220] Checking for updates...
	I0923 04:24:18.275086   20225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:24:18.278158   20225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:24:18.281164   20225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:24:18.284142   20225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:24:18.287248   20225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:24:18.288849   20225 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:24:18.292101   20225 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:24:18.298053   20225 start.go:297] selected driver: qemu2
	I0923 04:24:18.298059   20225 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:24:18.298068   20225 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:24:18.300228   20225 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:24:18.304109   20225 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:24:18.307251   20225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:24:18.307270   20225 cni.go:84] Creating CNI manager for ""
	I0923 04:24:18.307299   20225 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 04:24:18.307312   20225 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 04:24:18.307344   20225 start.go:340] cluster config:
	{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:24:18.311176   20225 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:24:18.320115   20225 out.go:177] * Starting "multinode-090000" primary control-plane node in "multinode-090000" cluster
	I0923 04:24:18.323994   20225 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:24:18.324016   20225 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:24:18.324030   20225 cache.go:56] Caching tarball of preloaded images
	I0923 04:24:18.324102   20225 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:24:18.324111   20225 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:24:18.324352   20225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/multinode-090000/config.json ...
	I0923 04:24:18.324367   20225 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/multinode-090000/config.json: {Name:mkbfd19fd01efdc804ddf9e79675b81702bc218a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:24:18.324608   20225 start.go:360] acquireMachinesLock for multinode-090000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:24:18.324646   20225 start.go:364] duration metric: took 31.833µs to acquireMachinesLock for "multinode-090000"
	I0923 04:24:18.324661   20225 start.go:93] Provisioning new machine with config: &{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:24:18.324696   20225 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:24:18.331946   20225 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:24:18.350139   20225 start.go:159] libmachine.API.Create for "multinode-090000" (driver="qemu2")
	I0923 04:24:18.350171   20225 client.go:168] LocalClient.Create starting
	I0923 04:24:18.350236   20225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:24:18.350266   20225 main.go:141] libmachine: Decoding PEM data...
	I0923 04:24:18.350281   20225 main.go:141] libmachine: Parsing certificate...
	I0923 04:24:18.350317   20225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:24:18.350341   20225 main.go:141] libmachine: Decoding PEM data...
	I0923 04:24:18.350353   20225 main.go:141] libmachine: Parsing certificate...
	I0923 04:24:18.350771   20225 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:24:18.518529   20225 main.go:141] libmachine: Creating SSH key...
	I0923 04:24:18.723415   20225 main.go:141] libmachine: Creating Disk image...
	I0923 04:24:18.723423   20225 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:24:18.723675   20225 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:24:18.733367   20225 main.go:141] libmachine: STDOUT: 
	I0923 04:24:18.733395   20225 main.go:141] libmachine: STDERR: 
	I0923 04:24:18.733471   20225 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2 +20000M
	I0923 04:24:18.741304   20225 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:24:18.741321   20225 main.go:141] libmachine: STDERR: 
	I0923 04:24:18.741345   20225 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:24:18.741350   20225 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:24:18.741361   20225 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:24:18.741395   20225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:ad:e5:f0:99:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:24:18.743009   20225 main.go:141] libmachine: STDOUT: 
	I0923 04:24:18.743024   20225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:24:18.743045   20225 client.go:171] duration metric: took 392.869042ms to LocalClient.Create
	I0923 04:24:20.745249   20225 start.go:128] duration metric: took 2.420536459s to createHost
	I0923 04:24:20.745350   20225 start.go:83] releasing machines lock for "multinode-090000", held for 2.420707125s
	W0923 04:24:20.745403   20225 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:24:20.766689   20225 out.go:177] * Deleting "multinode-090000" in qemu2 ...
	W0923 04:24:20.802607   20225 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:24:20.802639   20225 start.go:729] Will try again in 5 seconds ...
	I0923 04:24:25.804872   20225 start.go:360] acquireMachinesLock for multinode-090000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:24:25.805292   20225 start.go:364] duration metric: took 332.792µs to acquireMachinesLock for "multinode-090000"
	I0923 04:24:25.805409   20225 start.go:93] Provisioning new machine with config: &{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:24:25.805705   20225 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:24:25.819032   20225 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:24:25.872983   20225 start.go:159] libmachine.API.Create for "multinode-090000" (driver="qemu2")
	I0923 04:24:25.873042   20225 client.go:168] LocalClient.Create starting
	I0923 04:24:25.873200   20225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:24:25.873269   20225 main.go:141] libmachine: Decoding PEM data...
	I0923 04:24:25.873286   20225 main.go:141] libmachine: Parsing certificate...
	I0923 04:24:25.873355   20225 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:24:25.873400   20225 main.go:141] libmachine: Decoding PEM data...
	I0923 04:24:25.873417   20225 main.go:141] libmachine: Parsing certificate...
	I0923 04:24:25.873970   20225 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:24:26.053182   20225 main.go:141] libmachine: Creating SSH key...
	I0923 04:24:26.143401   20225 main.go:141] libmachine: Creating Disk image...
	I0923 04:24:26.143413   20225 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:24:26.143612   20225 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:24:26.152631   20225 main.go:141] libmachine: STDOUT: 
	I0923 04:24:26.152652   20225 main.go:141] libmachine: STDERR: 
	I0923 04:24:26.152707   20225 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2 +20000M
	I0923 04:24:26.160632   20225 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:24:26.160646   20225 main.go:141] libmachine: STDERR: 
	I0923 04:24:26.160658   20225 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:24:26.160663   20225 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:24:26.160673   20225 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:24:26.160712   20225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:8a:d5:38:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:24:26.162325   20225 main.go:141] libmachine: STDOUT: 
	I0923 04:24:26.162345   20225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:24:26.162358   20225 client.go:171] duration metric: took 289.312833ms to LocalClient.Create
	I0923 04:24:28.164551   20225 start.go:128] duration metric: took 2.358812708s to createHost
	I0923 04:24:28.164661   20225 start.go:83] releasing machines lock for "multinode-090000", held for 2.359357958s
	W0923 04:24:28.165082   20225 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:24:28.183935   20225 out.go:201] 
	W0923 04:24:28.187907   20225 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:24:28.187933   20225 out.go:270] * 
	* 
	W0923 04:24:28.190827   20225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:24:28.205866   20225 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-090000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (69.312333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.08s)
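
The verbose stderr above also shows why a dead socket is fatal this early: minikube does not exec qemu-system-aarch64 directly. libmachine runs socket_vmnet_client with the socket path as its first argument, and the client must connect before qemu starts (the connection is then handed to qemu, consistent with the "-netdev socket,id=net0,fd=3" flag in the log). A trimmed Go sketch of that launch pattern, with the long qemu argument list deliberately elided (the full invocation is preserved in the libmachine log line above):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // socket_vmnet_client dials /var/run/socket_vmnet first; if that
        // connection is refused, qemu never runs and libmachine records the
        // STDERR captured in the log above.
        cmd := exec.Command(
            "/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet",
            "qemu-system-aarch64",
            // remaining qemu flags elided; see the full command above
        )
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }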

TestMultiNode/serial/DeployApp2Nodes (97.59s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (62.060542ms)

** stderr ** 
	error: cluster "multinode-090000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- rollout status deployment/busybox: exit status 1 (57.841959ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.815334ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:28.468166   18914 retry.go:31] will retry after 542.866941ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.242917ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:29.118657   18914 retry.go:31] will retry after 1.804934635s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.185667ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:31.031095   18914 retry.go:31] will retry after 2.058048939s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.594666ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:33.194186   18914 retry.go:31] will retry after 2.035765877s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.861167ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:35.338309   18914 retry.go:31] will retry after 7.579462719s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.072208ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:43.022219   18914 retry.go:31] will retry after 6.97914218s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.310958ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:24:50.110041   18914 retry.go:31] will retry after 11.641332997s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.865833ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:25:01.857721   18914 retry.go:31] will retry after 18.083540139s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.52625ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:25:20.048103   18914 retry.go:31] will retry after 14.968829247s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.186458ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I0923 04:25:35.123444   18914 retry.go:31] will retry after 30.38496527s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.892042ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.674ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.78125ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.416875ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.573084ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.770416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (97.59s)
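The `retry.go:31] will retry after ...` lines above come from the harness's backoff helper: the Pod-IP lookup is retried with jittered, roughly doubling waits (2s, 7.6s, 7s, 11.6s, 18.1s, 15s, 30.4s) until the overall deadline passes, at which point multinode_test.go:524 gives up. A minimal Go sketch of that pattern, under the assumption that the helper behaves roughly like this (retryWithBackoff and its signature are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with jittered, roughly doubling waits
// until it succeeds or the deadline elapses.
func retryWithBackoff(op func() error, deadline time.Duration) error {
	wait := 2 * time.Second
	start := time.Now()
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) >= deadline {
			return err
		}
		// Sleep between 0.5x and 1.5x of the nominal wait, then double it,
		// which reproduces the uneven 2s/7.6s/7s/11.6s/... spacing above.
		sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`no server found for cluster "multinode-090000"`)
		}
		return nil
	}, time.Minute)
	fmt.Println("final:", err)
}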

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-090000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.719667ms)

** stderr ** 
	error: no server found for cluster "multinode-090000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (31.006042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-090000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-090000 -v 3 --alsologtostderr: exit status 83 (43.2565ms)

-- stdout --
	* The control-plane node multinode-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-090000"

-- /stdout --
** stderr ** 
	I0923 04:26:05.996693   20347 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:05.996859   20347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:05.996862   20347 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:05.996864   20347 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:05.997007   20347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:05.997239   20347 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:05.997453   20347 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:06.000953   20347 out.go:177] * The control-plane node multinode-090000 host is not running: state=Stopped
	I0923 04:26:06.005958   20347 out.go:177]   To start a cluster, run: "minikube start -p multinode-090000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-090000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.636791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-090000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-090000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.48175ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-090000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-090000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-090000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (31.302375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
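Two different failure modes show up in this test: the kubectl call itself fails ("context was not found for specified context", since this test uses plain `kubectl --context multinode-090000` rather than the `minikube kubectl -p` passthrough that reports "no server found"), and the follow-up decode of kubectl's empty stdout then fails with "unexpected end of JSON input". The latter is exactly what encoding/json returns for zero-byte input; a minimal reproduction (the label type here is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl exited non-zero, so its captured stdout is empty;
	// decoding zero bytes of JSON yields the error seen in the log.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}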

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-090000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-090000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-090000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-090000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.973542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
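The assertion at multinode_test.go:166 decodes `profile list --output json` and counts the entries under Config.Nodes; because the VM never started, the profile still records only the single control-plane node instead of the expected three. A sketch of that count under assumed, abbreviated struct shapes (only the fields the check needs; JSON field names follow the output quoted above):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Heavily truncated stand-in for the JSON in the failure message:
	// one node where three were expected.
	raw := `{"invalid":[],"valid":[{"Name":"multinode-090000","Config":{"Nodes":[{"ControlPlane":true}]}}]}`
	var pl profileList
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("profile %q has %d node(s), want 3\n", p.Name, len(p.Config.Nodes))
	}
}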

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status --output json --alsologtostderr: exit status 7 (30.704958ms)

-- stdout --
	{"Name":"multinode-090000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0923 04:26:06.207589   20359 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:06.207732   20359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.207735   20359 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:06.207738   20359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.207895   20359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:06.208009   20359 out.go:352] Setting JSON to true
	I0923 04:26:06.208023   20359 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:06.208086   20359 notify.go:220] Checking for updates...
	I0923 04:26:06.208250   20359 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:06.208259   20359 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:06.208504   20359 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:06.208508   20359 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:06.208510   20359 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-090000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.670542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
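The decode error at multinode_test.go:191 is a shape mismatch rather than bad JSON: with a single node, `status --output json` prints one object, while the test unmarshals into a slice ([]cluster.Status) sized for multiple nodes. The error reproduces with any stand-in struct (Status below is illustrative, not minikube's cluster.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The single-node output printed above: one JSON object, not an array.
	out := `{"Name":"multinode-090000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
	var statuses []Status
	err := json.Unmarshal([]byte(out), &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}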

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 node stop m03: exit status 85 (49.676417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-090000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status: exit status 7 (30.386625ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr: exit status 7 (30.601417ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:06.349669   20367 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:06.349861   20367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.349864   20367 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:06.349867   20367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.350002   20367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:06.350131   20367 out.go:352] Setting JSON to false
	I0923 04:26:06.350141   20367 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:06.350210   20367 notify.go:220] Checking for updates...
	I0923 04:26:06.350341   20367 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:06.350349   20367 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:06.350597   20367 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:06.350601   20367 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:06.350603   20367 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr": multinode-090000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (32.697709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
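The post-mortem helper requests `status --format={{.Host}}`; the format flag is a Go text/template rendered against each node's status, which is why the bare word "Stopped" comes back on stdout. A minimal sketch with an assumed two-field struct standing in for minikube's status type:

package main

import (
	"os"
	"text/template"
)

type nodeStatus struct{ Name, Host string }

func main() {
	// Render the same template the helper passes via --format.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints "Stopped", matching the post-mortem output above.
	if err := tmpl.Execute(os.Stdout, nodeStatus{Name: "multinode-090000", Host: "Stopped"}); err != nil {
		panic(err)
	}
}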

TestMultiNode/serial/StartAfterStop (57.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.925125ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0923 04:26:06.413587   20371 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:06.414009   20371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.414013   20371 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:06.414016   20371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.414169   20371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:06.414384   20371 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:06.414570   20371 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:06.417510   20371 out.go:201] 
	W0923 04:26:06.421454   20371 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0923 04:26:06.421460   20371 out.go:270] * 
	* 
	W0923 04:26:06.423680   20371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:26:06.427310   20371 out.go:201] 

** /stderr **
multinode_test.go:284: I0923 04:26:06.413587   20371 out.go:345] Setting OutFile to fd 1 ...
I0923 04:26:06.414009   20371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:26:06.414013   20371 out.go:358] Setting ErrFile to fd 2...
I0923 04:26:06.414016   20371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 04:26:06.414169   20371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
I0923 04:26:06.414384   20371 mustload.go:65] Loading cluster: multinode-090000
I0923 04:26:06.414570   20371 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 04:26:06.417510   20371 out.go:201] 
W0923 04:26:06.421454   20371 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0923 04:26:06.421460   20371 out.go:270] * 
* 
W0923 04:26:06.423680   20371 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0923 04:26:06.427310   20371 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-090000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (30.968916ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:06.460802   20373 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:06.460933   20373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.460937   20373 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:06.460939   20373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:06.461074   20373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:06.461193   20373 out.go:352] Setting JSON to false
	I0923 04:26:06.461203   20373 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:06.461274   20373 notify.go:220] Checking for updates...
	I0923 04:26:06.461392   20373 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:06.461400   20373 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:06.461624   20373 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:06.461628   20373 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:06.461630   20373 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:06.462528   18914 retry.go:31] will retry after 1.318099237s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (72.596625ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:07.853345   20378 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:07.853555   20378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:07.853563   20378 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:07.853566   20378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:07.853727   20378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:07.853891   20378 out.go:352] Setting JSON to false
	I0923 04:26:07.853905   20378 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:07.853937   20378 notify.go:220] Checking for updates...
	I0923 04:26:07.854153   20378 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:07.854164   20378 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:07.854519   20378 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:07.854524   20378 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:07.854527   20378 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:07.855689   18914 retry.go:31] will retry after 1.326987131s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (75.43475ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:09.258241   20380 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:09.258453   20380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:09.258458   20380 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:09.258461   20380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:09.258638   20380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:09.258780   20380 out.go:352] Setting JSON to false
	I0923 04:26:09.258794   20380 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:09.258842   20380 notify.go:220] Checking for updates...
	I0923 04:26:09.259101   20380 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:09.259112   20380 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:09.259432   20380 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:09.259436   20380 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:09.259439   20380 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:09.260449   18914 retry.go:31] will retry after 1.704393619s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (74.445042ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:11.039455   20382 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:11.039669   20382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:11.039674   20382 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:11.039676   20382 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:11.039845   20382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:11.040038   20382 out.go:352] Setting JSON to false
	I0923 04:26:11.040052   20382 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:11.040098   20382 notify.go:220] Checking for updates...
	I0923 04:26:11.040288   20382 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:11.040299   20382 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:11.040617   20382 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:11.040622   20382 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:11.040625   20382 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:11.041667   18914 retry.go:31] will retry after 4.190293976s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (74.193542ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:15.306308   20386 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:15.306539   20386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:15.306543   20386 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:15.306547   20386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:15.306729   20386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:15.306889   20386 out.go:352] Setting JSON to false
	I0923 04:26:15.306904   20386 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:15.306952   20386 notify.go:220] Checking for updates...
	I0923 04:26:15.307195   20386 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:15.307206   20386 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:15.307543   20386 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:15.307548   20386 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:15.307551   20386 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:15.308618   18914 retry.go:31] will retry after 4.81217767s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (72.260291ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:20.193185   20390 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:20.193389   20390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:20.193393   20390 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:20.193397   20390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:20.193570   20390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:20.193722   20390 out.go:352] Setting JSON to false
	I0923 04:26:20.193736   20390 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:20.193776   20390 notify.go:220] Checking for updates...
	I0923 04:26:20.194011   20390 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:20.194021   20390 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:20.194352   20390 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:20.194358   20390 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:20.194360   20390 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:20.195470   18914 retry.go:31] will retry after 7.057530668s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (74.932584ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:27.328236   20394 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:27.328408   20394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:27.328412   20394 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:27.328416   20394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:27.328607   20394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:27.328754   20394 out.go:352] Setting JSON to false
	I0923 04:26:27.328766   20394 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:27.328807   20394 notify.go:220] Checking for updates...
	I0923 04:26:27.329033   20394 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:27.329044   20394 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:27.329368   20394 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:27.329373   20394 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:27.329375   20394 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:27.330428   18914 retry.go:31] will retry after 13.715056121s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (73.996458ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:26:41.120484   20403 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:26:41.120675   20403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:41.120679   20403 out.go:358] Setting ErrFile to fd 2...
	I0923 04:26:41.120682   20403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:26:41.120861   20403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:26:41.121051   20403 out.go:352] Setting JSON to false
	I0923 04:26:41.121064   20403 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:26:41.121110   20403 notify.go:220] Checking for updates...
	I0923 04:26:41.121340   20403 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:26:41.121351   20403 status.go:174] checking status of multinode-090000 ...
	I0923 04:26:41.121669   20403 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:26:41.121674   20403 status.go:377] host is not running, skipping remaining checks
	I0923 04:26:41.121676   20403 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I0923 04:26:41.122815   18914 retry.go:31] will retry after 22.297713811s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr: exit status 7 (76.214375ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:27:03.500399   20421 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:27:03.500608   20421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:03.500613   20421 out.go:358] Setting ErrFile to fd 2...
	I0923 04:27:03.500616   20421 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:03.500775   20421 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:27:03.500943   20421 out.go:352] Setting JSON to false
	I0923 04:27:03.500960   20421 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:27:03.500999   20421 notify.go:220] Checking for updates...
	I0923 04:27:03.501231   20421 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:27:03.501242   20421 status.go:174] checking status of multinode-090000 ...
	I0923 04:27:03.501592   20421 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:27:03.501597   20421 status.go:377] host is not running, skipping remaining checks
	I0923 04:27:03.501599   20421 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-090000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (33.543167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.15s)

TestMultiNode/serial/RestartKeepsNodes (8.85s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-090000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-090000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-090000: (3.489564291s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-090000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-090000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.22785675s)

-- stdout --
	* [multinode-090000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-090000" primary control-plane node in "multinode-090000" cluster
	* Restarting existing qemu2 VM for "multinode-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:27:07.121444   20451 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:27:07.121598   20451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:07.121603   20451 out.go:358] Setting ErrFile to fd 2...
	I0923 04:27:07.121606   20451 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:07.121811   20451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:27:07.123017   20451 out.go:352] Setting JSON to false
	I0923 04:27:07.142273   20451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8798,"bootTime":1727082029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:27:07.142351   20451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:27:07.147053   20451 out.go:177] * [multinode-090000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:27:07.154939   20451 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:27:07.155001   20451 notify.go:220] Checking for updates...
	I0923 04:27:07.161861   20451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:27:07.165020   20451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:27:07.168903   20451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:27:07.171881   20451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:27:07.174999   20451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:27:07.178273   20451 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:27:07.178330   20451 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:27:07.181923   20451 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:27:07.188977   20451 start.go:297] selected driver: qemu2
	I0923 04:27:07.188984   20451 start.go:901] validating driver "qemu2" against &{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:27:07.189060   20451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:27:07.191705   20451 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:27:07.191742   20451 cni.go:84] Creating CNI manager for ""
	I0923 04:27:07.191773   20451 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 04:27:07.191828   20451 start.go:340] cluster config:
	{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:27:07.195892   20451 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:07.205009   20451 out.go:177] * Starting "multinode-090000" primary control-plane node in "multinode-090000" cluster
	I0923 04:27:07.208990   20451 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:27:07.209007   20451 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:27:07.209015   20451 cache.go:56] Caching tarball of preloaded images
	I0923 04:27:07.209087   20451 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:27:07.209101   20451 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:27:07.209158   20451 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/multinode-090000/config.json ...
	I0923 04:27:07.209613   20451 start.go:360] acquireMachinesLock for multinode-090000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:27:07.209652   20451 start.go:364] duration metric: took 32.375µs to acquireMachinesLock for "multinode-090000"
	I0923 04:27:07.209663   20451 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:27:07.209667   20451 fix.go:54] fixHost starting: 
	I0923 04:27:07.209795   20451 fix.go:112] recreateIfNeeded on multinode-090000: state=Stopped err=<nil>
	W0923 04:27:07.209807   20451 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:27:07.217101   20451 out.go:177] * Restarting existing qemu2 VM for "multinode-090000" ...
	I0923 04:27:07.220939   20451 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:27:07.220990   20451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:8a:d5:38:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:27:07.223269   20451 main.go:141] libmachine: STDOUT: 
	I0923 04:27:07.223289   20451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:27:07.223321   20451 fix.go:56] duration metric: took 13.650708ms for fixHost
	I0923 04:27:07.223326   20451 start.go:83] releasing machines lock for "multinode-090000", held for 13.667917ms
	W0923 04:27:07.223334   20451 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:27:07.223383   20451 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:07.223388   20451 start.go:729] Will try again in 5 seconds ...
	I0923 04:27:12.225831   20451 start.go:360] acquireMachinesLock for multinode-090000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:27:12.226174   20451 start.go:364] duration metric: took 267.084µs to acquireMachinesLock for "multinode-090000"
	I0923 04:27:12.226296   20451 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:27:12.226315   20451 fix.go:54] fixHost starting: 
	I0923 04:27:12.227058   20451 fix.go:112] recreateIfNeeded on multinode-090000: state=Stopped err=<nil>
	W0923 04:27:12.227087   20451 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:27:12.235439   20451 out.go:177] * Restarting existing qemu2 VM for "multinode-090000" ...
	I0923 04:27:12.240435   20451 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:27:12.240681   20451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:8a:d5:38:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:27:12.249549   20451 main.go:141] libmachine: STDOUT: 
	I0923 04:27:12.249606   20451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:27:12.249676   20451 fix.go:56] duration metric: took 23.3585ms for fixHost
	I0923 04:27:12.249693   20451 start.go:83] releasing machines lock for "multinode-090000", held for 23.4915ms
	W0923 04:27:12.249874   20451 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:12.256551   20451 out.go:201] 
	W0923 04:27:12.260476   20451 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:27:12.260516   20451 out.go:270] * 
	* 
	W0923 04:27:12.263068   20451 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:27:12.272434   20451 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-090000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-090000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (34.496666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.85s)
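
Every qemu2 failure in this run has the same shape: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the /var/run/socket_vmnet unix socket before it can hand qemu the descriptor behind "-netdev socket,id=net0,fd=3", and nothing on the CI host is listening there, so the dial is refused before qemu ever starts. A minimal Go sketch of that pre-flight dial (probeSocketVMnet is an illustrative helper, not part of minikube) reproduces the "Connection refused" seen above, which points at a dead socket_vmnet daemon on the host rather than anything profile-specific:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the socket_vmnet control socket the same way
	// socket_vmnet_client must before it can pass qemu a network fd.
	// With no daemon listening, the dial fails with ECONNREFUSED,
	// matching the "Connection refused" lines in the log above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			// e.g. dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println("pre-flight failed:", err)
		}
	}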

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 node delete m03: exit status 83 (42.666458ms)

-- stdout --
	* The control-plane node multinode-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-090000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-090000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr: exit status 7 (30.578333ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:27:12.461446   20470 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:27:12.461595   20470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:12.461598   20470 out.go:358] Setting ErrFile to fd 2...
	I0923 04:27:12.461600   20470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:12.461738   20470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:27:12.461867   20470 out.go:352] Setting JSON to false
	I0923 04:27:12.461877   20470 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:27:12.461934   20470 notify.go:220] Checking for updates...
	I0923 04:27:12.462088   20470 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:27:12.462096   20470 status.go:174] checking status of multinode-090000 ...
	I0923 04:27:12.462338   20470 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:27:12.462342   20470 status.go:377] host is not running, skipping remaining checks
	I0923 04:27:12.462344   20470 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.360625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (2.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-090000 stop: (1.88338275s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status: exit status 7 (67.777709ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr: exit status 7 (33.126208ms)

-- stdout --
	multinode-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0923 04:27:14.476898   20488 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:27:14.477036   20488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:14.477039   20488 out.go:358] Setting ErrFile to fd 2...
	I0923 04:27:14.477042   20488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:14.477181   20488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:27:14.477292   20488 out.go:352] Setting JSON to false
	I0923 04:27:14.477301   20488 mustload.go:65] Loading cluster: multinode-090000
	I0923 04:27:14.477371   20488 notify.go:220] Checking for updates...
	I0923 04:27:14.477495   20488 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:27:14.477503   20488 status.go:174] checking status of multinode-090000 ...
	I0923 04:27:14.477750   20488 status.go:364] multinode-090000 host status = "Stopped" (err=<nil>)
	I0923 04:27:14.477754   20488 status.go:377] host is not running, skipping remaining checks
	I0923 04:27:14.477756   20488 status.go:176] multinode-090000 status: &{Name:multinode-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr": multinode-090000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-090000 status --alsologtostderr": multinode-090000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.82675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.02s)
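
The two "incorrect number" assertions above are count checks: the test tallies stopped components in the status output and expects one per node, but with the second node already lost earlier in the suite only a single host/kubelet pair remains. A sketch of that counting (the expected count of 2 is an assumption based on the intended two-node topology; the actual assertion lives in multinode_test.go):

	package main

	import (
		"fmt"
		"strings"
	)

	// countStopped mirrors the style of check reported at
	// multinode_test.go:364 and :368: count stopped components in the
	// status output and compare them with the expected node count.
	func countStopped(statusOut string, wantNodes int) error {
		if n := strings.Count(statusOut, "host: Stopped"); n != wantNodes {
			return fmt.Errorf("incorrect number of stopped hosts: got %d, want %d", n, wantNodes)
		}
		if n := strings.Count(statusOut, "kubelet: Stopped"); n != wantNodes {
			return fmt.Errorf("incorrect number of stopped kubelets: got %d, want %d", n, wantNodes)
		}
		return nil
	}

	func main() {
		out := "multinode-090000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println(countStopped(out, 2)) // fails: only one node is left in the output
	}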

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-090000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-090000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188451959s)

-- stdout --
	* [multinode-090000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-090000" primary control-plane node in "multinode-090000" cluster
	* Restarting existing qemu2 VM for "multinode-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:27:14.538206   20492 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:27:14.538347   20492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:14.538353   20492 out.go:358] Setting ErrFile to fd 2...
	I0923 04:27:14.538356   20492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:14.538471   20492 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:27:14.539491   20492 out.go:352] Setting JSON to false
	I0923 04:27:14.555686   20492 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8805,"bootTime":1727082029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:27:14.555754   20492 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:27:14.560210   20492 out.go:177] * [multinode-090000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:27:14.568382   20492 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:27:14.568437   20492 notify.go:220] Checking for updates...
	I0923 04:27:14.576282   20492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:27:14.580292   20492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:27:14.583294   20492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:27:14.586318   20492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:27:14.589320   20492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:27:14.592494   20492 config.go:182] Loaded profile config "multinode-090000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:27:14.592758   20492 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:27:14.597282   20492 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:27:14.604209   20492 start.go:297] selected driver: qemu2
	I0923 04:27:14.604214   20492 start.go:901] validating driver "qemu2" against &{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:27:14.604261   20492 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:27:14.606598   20492 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:27:14.606630   20492 cni.go:84] Creating CNI manager for ""
	I0923 04:27:14.606649   20492 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 04:27:14.606704   20492 start.go:340] cluster config:
	{Name:multinode-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-090000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:27:14.610327   20492 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:14.618119   20492 out.go:177] * Starting "multinode-090000" primary control-plane node in "multinode-090000" cluster
	I0923 04:27:14.622232   20492 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:27:14.622249   20492 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:27:14.622254   20492 cache.go:56] Caching tarball of preloaded images
	I0923 04:27:14.622311   20492 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:27:14.622317   20492 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:27:14.622381   20492 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/multinode-090000/config.json ...
	I0923 04:27:14.622815   20492 start.go:360] acquireMachinesLock for multinode-090000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:27:14.622842   20492 start.go:364] duration metric: took 21.584µs to acquireMachinesLock for "multinode-090000"
	I0923 04:27:14.622852   20492 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:27:14.622857   20492 fix.go:54] fixHost starting: 
	I0923 04:27:14.622968   20492 fix.go:112] recreateIfNeeded on multinode-090000: state=Stopped err=<nil>
	W0923 04:27:14.622979   20492 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:27:14.627159   20492 out.go:177] * Restarting existing qemu2 VM for "multinode-090000" ...
	I0923 04:27:14.635319   20492 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:27:14.635360   20492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:8a:d5:38:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:27:14.637323   20492 main.go:141] libmachine: STDOUT: 
	I0923 04:27:14.637343   20492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:27:14.637373   20492 fix.go:56] duration metric: took 14.514542ms for fixHost
	I0923 04:27:14.637377   20492 start.go:83] releasing machines lock for "multinode-090000", held for 14.529916ms
	W0923 04:27:14.637383   20492 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:27:14.637424   20492 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:14.637429   20492 start.go:729] Will try again in 5 seconds ...
	I0923 04:27:19.639881   20492 start.go:360] acquireMachinesLock for multinode-090000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:27:19.640436   20492 start.go:364] duration metric: took 445.625µs to acquireMachinesLock for "multinode-090000"
	I0923 04:27:19.640601   20492 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:27:19.640621   20492 fix.go:54] fixHost starting: 
	I0923 04:27:19.641384   20492 fix.go:112] recreateIfNeeded on multinode-090000: state=Stopped err=<nil>
	W0923 04:27:19.641410   20492 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:27:19.644866   20492 out.go:177] * Restarting existing qemu2 VM for "multinode-090000" ...
	I0923 04:27:19.654999   20492 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:27:19.655225   20492 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:4e:8a:d5:38:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/multinode-090000/disk.qcow2
	I0923 04:27:19.664614   20492 main.go:141] libmachine: STDOUT: 
	I0923 04:27:19.664720   20492 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:27:19.664799   20492 fix.go:56] duration metric: took 24.179042ms for fixHost
	I0923 04:27:19.664815   20492 start.go:83] releasing machines lock for "multinode-090000", held for 24.357ms
	W0923 04:27:19.664977   20492 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:19.671919   20492 out.go:201] 
	W0923 04:27:19.674912   20492 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:27:19.674935   20492 out.go:270] * 
	* 
	W0923 04:27:19.677856   20492 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:27:19.685865   20492 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-090000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (70.203625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
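
As in every restart above, start.go makes exactly one retry after a fixed five-second wait ("Will try again in 5 seconds ..."), which is why each of these restart tests burns roughly five seconds before exiting with status 80. The two-attempt shape, sketched under the assumption that only the delay and attempt count matter here (startHost stands in for the fixHost/driver-start path):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithOneRetry sketches the retry visible in the log: one
	// immediate attempt, a fixed 5s wait, one more attempt, then give up.
	func startWithOneRetry(startHost func() error) error {
		if err := startHost(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
		return startHost()
	}

	func main() {
		refused := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		fmt.Println(startWithOneRetry(refused)) // still refused after the 5s backoff
	}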

TestMultiNode/serial/ValidateNameConflict (20.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-090000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-090000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-090000-m01 --driver=qemu2 : exit status 80 (10.136949s)

-- stdout --
	* [multinode-090000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-090000-m01" primary control-plane node in "multinode-090000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-090000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-090000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-090000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-090000-m02 --driver=qemu2 : exit status 80 (10.105623292s)

-- stdout --
	* [multinode-090000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-090000-m02" primary control-plane node in "multinode-090000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-090000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-090000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-090000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-090000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-090000: exit status 83 (81.589833ms)

-- stdout --
	* The control-plane node multinode-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-090000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-090000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-090000 -n multinode-090000: exit status 7 (30.971166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.47s)
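
ValidateNameConflict deliberately creates profiles named multinode-090000-m01 and -m02, which collide with the -mNN suffix minikube appends to secondary node names, then checks that "node add" refuses the conflict; because both VM creations also die on socket_vmnet, the conflict logic is never meaningfully exercised here. A hypothetical suffix check of the kind being probed (the regexp and helper are illustrative, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
	)

	// nodeSuffix matches the -mNN suffix minikube uses for secondary
	// node names (multinode-090000-m02, -m03, ...). conflictsWithNodeName
	// is a hypothetical helper, not minikube's actual validation.
	var nodeSuffix = regexp.MustCompile(`-m\d+$`)

	func conflictsWithNodeName(profile string) bool {
		return nodeSuffix.MatchString(profile)
	}

	func main() {
		fmt.Println(conflictsWithNodeName("multinode-090000-m01")) // true
		fmt.Println(conflictsWithNodeName("multinode-090000"))     // false
	}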

TestPreload (10.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-043000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-043000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.9173895s)

-- stdout --
	* [test-preload-043000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-043000" primary control-plane node in "test-preload-043000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-043000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:27:40.377864   20559 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:27:40.377982   20559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:40.377985   20559 out.go:358] Setting ErrFile to fd 2...
	I0923 04:27:40.377987   20559 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:27:40.378118   20559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:27:40.379166   20559 out.go:352] Setting JSON to false
	I0923 04:27:40.395537   20559 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8831,"bootTime":1727082029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:27:40.395614   20559 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:27:40.401862   20559 out.go:177] * [test-preload-043000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:27:40.405792   20559 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:27:40.405914   20559 notify.go:220] Checking for updates...
	I0923 04:27:40.412770   20559 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:27:40.415816   20559 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:27:40.419844   20559 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:27:40.423778   20559 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:27:40.426833   20559 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:27:40.430157   20559 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:27:40.430213   20559 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:27:40.434748   20559 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:27:40.441791   20559 start.go:297] selected driver: qemu2
	I0923 04:27:40.441797   20559 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:27:40.441803   20559 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:27:40.444028   20559 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:27:40.446793   20559 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:27:40.449859   20559 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:27:40.449877   20559 cni.go:84] Creating CNI manager for ""
	I0923 04:27:40.449899   20559 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:27:40.449905   20559 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:27:40.449929   20559 start.go:340] cluster config:
	{Name:test-preload-043000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:27:40.453771   20559 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.462818   20559 out.go:177] * Starting "test-preload-043000" primary control-plane node in "test-preload-043000" cluster
	I0923 04:27:40.466849   20559 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0923 04:27:40.466936   20559 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/test-preload-043000/config.json ...
	I0923 04:27:40.466958   20559 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/test-preload-043000/config.json: {Name:mk05b9a0364c05bfc32051120d1614e47b8045fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:27:40.466960   20559 cache.go:107] acquiring lock: {Name:mk149f78b192b6198ebee9e7840058ae5a096258 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.466952   20559 cache.go:107] acquiring lock: {Name:mk68988f1f9604fcece744efbaebd52276b3cd87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467015   20559 cache.go:107] acquiring lock: {Name:mkb16bcc52fea00d3ae2dbaaed34deafd8460223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467098   20559 cache.go:107] acquiring lock: {Name:mk2f326c6ec66dc21647d9a067d37bb0fa0c3273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467172   20559 cache.go:107] acquiring lock: {Name:mkdd65e5894496dd574fb031b67213f0ca34b357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467132   20559 cache.go:107] acquiring lock: {Name:mk340bdff56b9b2bbf897ecba4e3893e5801b2e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467242   20559 cache.go:107] acquiring lock: {Name:mkb24d69182b231db385aa2182ad7626e0086e1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467256   20559 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 04:27:40.467251   20559 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 04:27:40.467262   20559 cache.go:107] acquiring lock: {Name:mke992e8bda4bd4c8541f2dcc11746377511a0a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:27:40.467412   20559 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 04:27:40.467492   20559 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:27:40.467501   20559 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 04:27:40.467548   20559 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 04:27:40.467555   20559 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:27:40.467587   20559 start.go:360] acquireMachinesLock for test-preload-043000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:27:40.467590   20559 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:27:40.467650   20559 start.go:364] duration metric: took 38.666µs to acquireMachinesLock for "test-preload-043000"
	I0923 04:27:40.467666   20559 start.go:93] Provisioning new machine with config: &{Name:test-preload-043000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:27:40.467697   20559 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:27:40.474784   20559 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:27:40.479964   20559 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0923 04:27:40.479986   20559 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:27:40.480369   20559 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0923 04:27:40.480701   20559 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 04:27:40.482978   20559 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:27:40.483154   20559 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0923 04:27:40.483209   20559 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:27:40.483226   20559 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0923 04:27:40.493679   20559 start.go:159] libmachine.API.Create for "test-preload-043000" (driver="qemu2")
	I0923 04:27:40.493705   20559 client.go:168] LocalClient.Create starting
	I0923 04:27:40.493787   20559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:27:40.493819   20559 main.go:141] libmachine: Decoding PEM data...
	I0923 04:27:40.493829   20559 main.go:141] libmachine: Parsing certificate...
	I0923 04:27:40.493876   20559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:27:40.493901   20559 main.go:141] libmachine: Decoding PEM data...
	I0923 04:27:40.493911   20559 main.go:141] libmachine: Parsing certificate...
	I0923 04:27:40.494361   20559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:27:40.662600   20559 main.go:141] libmachine: Creating SSH key...
	I0923 04:27:40.775855   20559 main.go:141] libmachine: Creating Disk image...
	I0923 04:27:40.775882   20559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:27:40.776100   20559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2
	I0923 04:27:40.785946   20559 main.go:141] libmachine: STDOUT: 
	I0923 04:27:40.785967   20559 main.go:141] libmachine: STDERR: 
	I0923 04:27:40.786032   20559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2 +20000M
	I0923 04:27:40.794997   20559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:27:40.795018   20559 main.go:141] libmachine: STDERR: 
	I0923 04:27:40.795035   20559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2
	I0923 04:27:40.795041   20559 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:27:40.795054   20559 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:27:40.795079   20559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c0:72:6c:bd:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2
	I0923 04:27:40.797158   20559 main.go:141] libmachine: STDOUT: 
	I0923 04:27:40.797176   20559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:27:40.797197   20559 client.go:171] duration metric: took 303.485917ms to LocalClient.Create
	I0923 04:27:40.864660   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0923 04:27:40.866031   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 04:27:40.871570   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0923 04:27:40.876279   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0923 04:27:40.898049   20559 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 04:27:40.898075   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 04:27:40.916649   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0923 04:27:40.985885   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0923 04:27:40.986968   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0923 04:27:40.986987   20559 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 519.894583ms
	I0923 04:27:40.987002   20559 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0923 04:27:41.561116   20559 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 04:27:41.561208   20559 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 04:27:42.051091   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 04:27:42.051138   20559 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.584175375s
	I0923 04:27:42.051165   20559 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 04:27:42.797456   20559 start.go:128] duration metric: took 2.329732542s to createHost
	I0923 04:27:42.797505   20559 start.go:83] releasing machines lock for "test-preload-043000", held for 2.329840916s
	W0923 04:27:42.797576   20559 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:42.817694   20559 out.go:177] * Deleting "test-preload-043000" in qemu2 ...
	W0923 04:27:42.855633   20559 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:42.855667   20559 start.go:729] Will try again in 5 seconds ...
	I0923 04:27:43.205480   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0923 04:27:43.205527   20559 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.738252458s
	I0923 04:27:43.205550   20559 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0923 04:27:43.728590   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0923 04:27:43.728649   20559 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.261666917s
	I0923 04:27:43.728677   20559 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0923 04:27:44.866364   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0923 04:27:44.866406   20559 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.399240958s
	I0923 04:27:44.866433   20559 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0923 04:27:44.923637   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0923 04:27:44.923686   20559 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.456733958s
	I0923 04:27:44.923711   20559 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0923 04:27:46.610290   20559 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0923 04:27:46.610346   20559 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.14311175s
	I0923 04:27:46.610371   20559 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0923 04:27:47.855893   20559 start.go:360] acquireMachinesLock for test-preload-043000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:27:47.856304   20559 start.go:364] duration metric: took 333.916µs to acquireMachinesLock for "test-preload-043000"
	I0923 04:27:47.856423   20559 start.go:93] Provisioning new machine with config: &{Name:test-preload-043000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-043000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:27:47.856706   20559 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:27:47.863285   20559 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:27:47.914032   20559 start.go:159] libmachine.API.Create for "test-preload-043000" (driver="qemu2")
	I0923 04:27:47.914086   20559 client.go:168] LocalClient.Create starting
	I0923 04:27:47.914213   20559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:27:47.914277   20559 main.go:141] libmachine: Decoding PEM data...
	I0923 04:27:47.914292   20559 main.go:141] libmachine: Parsing certificate...
	I0923 04:27:47.914346   20559 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:27:47.914394   20559 main.go:141] libmachine: Decoding PEM data...
	I0923 04:27:47.914409   20559 main.go:141] libmachine: Parsing certificate...
	I0923 04:27:47.914929   20559 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:27:48.086669   20559 main.go:141] libmachine: Creating SSH key...
	I0923 04:27:48.190955   20559 main.go:141] libmachine: Creating Disk image...
	I0923 04:27:48.190961   20559 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:27:48.191163   20559 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2
	I0923 04:27:48.200797   20559 main.go:141] libmachine: STDOUT: 
	I0923 04:27:48.200814   20559 main.go:141] libmachine: STDERR: 
	I0923 04:27:48.200870   20559 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2 +20000M
	I0923 04:27:48.208949   20559 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:27:48.208977   20559 main.go:141] libmachine: STDERR: 
	I0923 04:27:48.208995   20559 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2
	I0923 04:27:48.209000   20559 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:27:48.209009   20559 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:27:48.209051   20559 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:f2:e2:2d:37:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/test-preload-043000/disk.qcow2
	I0923 04:27:48.210884   20559 main.go:141] libmachine: STDOUT: 
	I0923 04:27:48.210901   20559 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:27:48.210914   20559 client.go:171] duration metric: took 296.820791ms to LocalClient.Create
	I0923 04:27:50.211682   20559 start.go:128] duration metric: took 2.35492125s to createHost
	I0923 04:27:50.211757   20559 start.go:83] releasing machines lock for "test-preload-043000", held for 2.355426833s
	W0923 04:27:50.212086   20559 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:27:50.229958   20559 out.go:201] 
	W0923 04:27:50.234778   20559 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:27:50.234816   20559 out.go:270] * 
	* 
	W0923 04:27:50.237308   20559 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:27:50.251574   20559 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-043000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-23 04:27:50.26941 -0700 PDT m=+698.270486293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-043000 -n test-preload-043000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-043000 -n test-preload-043000: exit status 7 (65.903ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-043000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-043000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-043000
--- FAIL: TestPreload (10.07s)
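
All of the qemu2 failures in this stretch of the run reduce to the same root cause: the socket_vmnet daemon on the build agent is refusing connections at /var/run/socket_vmnet, so every VM creation aborts before the guest boots. A minimal diagnostic sketch for the agent follows; the socket and client paths are taken from the cluster config above, while the Homebrew service invocation is an assumption about how socket_vmnet was installed on this machine:

	# Does the listening socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, restart it; socket_vmnet must run as root to use
	# vmnet.framework (assumes a Homebrew-managed socket_vmnet service):
	sudo brew services restart socket_vmnet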

TestScheduledStopUnix (10.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-392000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-392000 --memory=2048 --driver=qemu2 : exit status 80 (9.928142583s)

-- stdout --
	* [scheduled-stop-392000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-392000" primary control-plane node in "scheduled-stop-392000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-392000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-392000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-392000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-392000" primary control-plane node in "scheduled-stop-392000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-392000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-392000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-23 04:28:00.346239 -0700 PDT m=+708.347335876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-392000 -n scheduled-stop-392000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-392000 -n scheduled-stop-392000: exit status 7 (68.592666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-392000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-392000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-392000
--- FAIL: TestScheduledStopUnix (10.08s)
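
The failing launch path is visible in the TestPreload trace above: minikube execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the client exits as soon as its unix-socket connect is refused, taking the VM down with it. The same connect can be probed without minikube; a small sketch using the BSD nc shipped with macOS, where the exit status rather than the output is the signal:

	# Probe the unix socket directly; "Connection refused" here reproduces
	# the failure independently of minikube and QEMU.
	nc -U /var/run/socket_vmnet < /dev/null && echo reachable || echo refused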

TestSkaffold (13.44s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe472014643 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe472014643 version: (1.065660541s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-867000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-867000 --memory=2600 --driver=qemu2 : exit status 80 (10.813417375s)

-- stdout --
	* [skaffold-867000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-867000" primary control-plane node in "skaffold-867000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-867000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-867000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-867000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-867000" primary control-plane node in "skaffold-867000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-867000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-867000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-23 04:28:13.785506 -0700 PDT m=+721.786648210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-867000 -n skaffold-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-867000 -n skaffold-867000: exit status 7 (61.083625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-867000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-867000
--- FAIL: TestSkaffold (13.44s)
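
One way to separate the network layer from the rest of the driver: the qemu2 driver also accepts QEMU's builtin user-mode network via its --network flag, which bypasses socket_vmnet entirely. If the sketch below boots while the default configuration fails, the daemon rather than QEMU or the ISO is the broken piece; that would be consistent with TestRunningBinaryUpgrade below, where the v1.26.0 binary (whose profile leaves SocketVMnetPath empty) starts its VM successfully. The profile name here is arbitrary:

	# Start a throwaway profile on the builtin user network, which needs
	# no /var/run/socket_vmnet daemon, then clean it up:
	out/minikube-darwin-arm64 start -p vmnet-probe --memory=2200 --driver=qemu2 --network=user
	out/minikube-darwin-arm64 delete -p vmnet-probe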

TestRunningBinaryUpgrade (629.02s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.618153535 start -p running-upgrade-903000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.618153535 start -p running-upgrade-903000 --memory=2200 --vm-driver=qemu2 : (1m1.155263042s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-903000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-903000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m54.532180125s)

-- stdout --
	* [running-upgrade-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-903000" primary control-plane node in "running-upgrade-903000" cluster
	* Updating the running qemu2 "running-upgrade-903000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0923 04:29:38.065258   20917 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:29:38.065393   20917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:29:38.065396   20917 out.go:358] Setting ErrFile to fd 2...
	I0923 04:29:38.065398   20917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:29:38.065544   20917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:29:38.066623   20917 out.go:352] Setting JSON to false
	I0923 04:29:38.083952   20917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8949,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:29:38.084032   20917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:29:38.089153   20917 out.go:177] * [running-upgrade-903000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:29:38.095129   20917 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:29:38.095169   20917 notify.go:220] Checking for updates...
	I0923 04:29:38.103153   20917 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:29:38.107108   20917 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:29:38.110121   20917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:29:38.113177   20917 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:29:38.116068   20917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:29:38.119360   20917 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:29:38.122092   20917 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 04:29:38.125077   20917 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:29:38.128092   20917 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:29:38.134061   20917 start.go:297] selected driver: qemu2
	I0923 04:29:38.134065   20917 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53371 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 04:29:38.134105   20917 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:29:38.136331   20917 cni.go:84] Creating CNI manager for ""
	I0923 04:29:38.136362   20917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:29:38.136398   20917 start.go:340] cluster config:
	{Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53371 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 04:29:38.136447   20917 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:29:38.144111   20917 out.go:177] * Starting "running-upgrade-903000" primary control-plane node in "running-upgrade-903000" cluster
	I0923 04:29:38.147151   20917 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 04:29:38.147164   20917 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 04:29:38.147167   20917 cache.go:56] Caching tarball of preloaded images
	I0923 04:29:38.147213   20917 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:29:38.147218   20917 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 04:29:38.147263   20917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/config.json ...
	I0923 04:29:38.147579   20917 start.go:360] acquireMachinesLock for running-upgrade-903000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:29:50.636497   20917 start.go:364] duration metric: took 12.48896475s to acquireMachinesLock for "running-upgrade-903000"
	I0923 04:29:50.636523   20917 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:29:50.636533   20917 fix.go:54] fixHost starting: 
	I0923 04:29:50.637426   20917 fix.go:112] recreateIfNeeded on running-upgrade-903000: state=Running err=<nil>
	W0923 04:29:50.637439   20917 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:29:50.704865   20917 out.go:177] * Updating the running qemu2 "running-upgrade-903000" VM ...
	I0923 04:29:50.712712   20917 machine.go:93] provisionDockerMachine start ...
	I0923 04:29:50.712825   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.712994   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:50.713001   20917 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 04:29:50.777516   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-903000
	
	I0923 04:29:50.777539   20917 buildroot.go:166] provisioning hostname "running-upgrade-903000"
	I0923 04:29:50.777606   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.777732   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:50.777743   20917 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-903000 && echo "running-upgrade-903000" | sudo tee /etc/hostname
	I0923 04:29:50.840797   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-903000
	
	I0923 04:29:50.840857   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.840962   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:50.840970   20917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-903000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-903000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-903000' | sudo tee -a /etc/hosts; 
				fi
			fi
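The quoted shell above is libmachine's standard hostname fix-up: if no /etc/hosts line ends in the machine name, it rewrites an existing 127.0.1.1 entry or appends one. A minimal manual verification inside the guest (hypothetical session) would be:

    hostname                          # expect: running-upgrade-903000
    grep '^127\.0\.1\.1' /etc/hosts   # expect: 127.0.1.1 running-upgrade-903000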
	I0923 04:29:50.899081   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 04:29:50.899094   20917 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19690-18362/.minikube CaCertPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19690-18362/.minikube}
	I0923 04:29:50.899114   20917 buildroot.go:174] setting up certificates
	I0923 04:29:50.899120   20917 provision.go:84] configureAuth start
	I0923 04:29:50.899126   20917 provision.go:143] copyHostCerts
	I0923 04:29:50.899182   20917 exec_runner.go:144] found /Users/jenkins/minikube-integration/19690-18362/.minikube/cert.pem, removing ...
	I0923 04:29:50.899189   20917 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19690-18362/.minikube/cert.pem
	I0923 04:29:50.899298   20917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19690-18362/.minikube/cert.pem (1123 bytes)
	I0923 04:29:50.899472   20917 exec_runner.go:144] found /Users/jenkins/minikube-integration/19690-18362/.minikube/key.pem, removing ...
	I0923 04:29:50.899477   20917 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19690-18362/.minikube/key.pem
	I0923 04:29:50.899523   20917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19690-18362/.minikube/key.pem (1675 bytes)
	I0923 04:29:50.899622   20917 exec_runner.go:144] found /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.pem, removing ...
	I0923 04:29:50.899628   20917 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.pem
	I0923 04:29:50.899666   20917 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.pem (1078 bytes)
	I0923 04:29:50.899748   20917 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-903000 san=[127.0.0.1 localhost minikube running-upgrade-903000]
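The server certificate generated here embeds the names and IPs listed after san=[...]; only those identities will pass TLS verification against the provisioned Docker daemon. One way to confirm the SANs landed in the certificate (an openssl sketch using the path from the log):

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expect the four SANs from the san=[...] list above (order may vary)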
	I0923 04:29:51.034853   20917 provision.go:177] copyRemoteCerts
	I0923 04:29:51.034891   20917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 04:29:51.034902   20917 sshutil.go:53] new ssh client: &{IP:localhost Port:53278 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 04:29:51.067127   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 04:29:51.074111   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 04:29:51.081104   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 04:29:51.088279   20917 provision.go:87] duration metric: took 189.148709ms to configureAuth
	I0923 04:29:51.088288   20917 buildroot.go:189] setting minikube options for container-runtime
	I0923 04:29:51.088405   20917 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:29:51.088453   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:51.088541   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:51.088546   20917 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 04:29:51.148823   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 04:29:51.148834   20917 buildroot.go:70] root file system type: tmpfs
	I0923 04:29:51.148886   20917 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 04:29:51.148953   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:51.149067   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:51.149099   20917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 04:29:51.213622   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 04:29:51.213686   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:51.213814   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:51.213823   20917 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 04:29:51.272511   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
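The one-liner above makes the unit update idempotent: diff exits 0 when the freshly rendered docker.service.new matches the installed unit, so the replace/reload/restart branch only runs on a real change. Spelled out, the same logic reads (a sketch, flags copied from the log):

    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi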
	I0923 04:29:51.272525   20917 machine.go:96] duration metric: took 559.802709ms to provisionDockerMachine
	I0923 04:29:51.272532   20917 start.go:293] postStartSetup for "running-upgrade-903000" (driver="qemu2")
	I0923 04:29:51.272538   20917 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 04:29:51.272603   20917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 04:29:51.272612   20917 sshutil.go:53] new ssh client: &{IP:localhost Port:53278 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 04:29:51.303630   20917 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 04:29:51.305044   20917 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 04:29:51.305051   20917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19690-18362/.minikube/addons for local assets ...
	I0923 04:29:51.305131   20917 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19690-18362/.minikube/files for local assets ...
	I0923 04:29:51.305223   20917 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem -> 189142.pem in /etc/ssl/certs
	I0923 04:29:51.305326   20917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 04:29:51.308610   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem --> /etc/ssl/certs/189142.pem (1708 bytes)
	I0923 04:29:51.315825   20917 start.go:296] duration metric: took 43.287875ms for postStartSetup
	I0923 04:29:51.315839   20917 fix.go:56] duration metric: took 679.312291ms for fixHost
	I0923 04:29:51.315882   20917 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:51.316013   20917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102431c00] 0x102434440 <nil>  [] 0s} localhost 53278 <nil> <nil>}
	I0923 04:29:51.316019   20917 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 04:29:51.377079   20917 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727090991.481340061
	
	I0923 04:29:51.377090   20917 fix.go:216] guest clock: 1727090991.481340061
	I0923 04:29:51.377094   20917 fix.go:229] Guest: 2024-09-23 04:29:51.481340061 -0700 PDT Remote: 2024-09-23 04:29:51.315841 -0700 PDT m=+13.271920626 (delta=165.499061ms)
	I0923 04:29:51.377106   20917 fix.go:200] guest clock delta is within tolerance: 165.499061ms
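The tolerance check is plain subtraction of the two date +%s.%N readings: guest 1727090991.481340061 minus host 1727090991.315841 ≈ 0.165499 s, i.e. the logged delta of 165.499061ms, so the guest clock is left alone.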
	I0923 04:29:51.377109   20917 start.go:83] releasing machines lock for "running-upgrade-903000", held for 740.605333ms
	I0923 04:29:51.377189   20917 ssh_runner.go:195] Run: cat /version.json
	I0923 04:29:51.377198   20917 sshutil.go:53] new ssh client: &{IP:localhost Port:53278 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 04:29:51.377241   20917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 04:29:51.377263   20917 sshutil.go:53] new ssh client: &{IP:localhost Port:53278 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	W0923 04:29:51.377967   20917 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:53518->127.0.0.1:53278: write: broken pipe
	I0923 04:29:51.377987   20917 retry.go:31] will retry after 155.004021ms: ssh: handshake failed: write tcp 127.0.0.1:53518->127.0.0.1:53278: write: broken pipe
	W0923 04:29:51.565931   20917 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 04:29:51.566020   20917 ssh_runner.go:195] Run: systemctl --version
	I0923 04:29:51.568189   20917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 04:29:51.571107   20917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 04:29:51.571146   20917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 04:29:51.574807   20917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 04:29:51.582795   20917 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 04:29:51.582809   20917 start.go:495] detecting cgroup driver to use...
	I0923 04:29:51.582883   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 04:29:51.589030   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 04:29:51.592145   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 04:29:51.595274   20917 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 04:29:51.595299   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 04:29:51.598731   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 04:29:51.602498   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 04:29:51.605439   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 04:29:51.609025   20917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 04:29:51.613787   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 04:29:51.617542   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 04:29:51.620928   20917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 04:29:51.624302   20917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 04:29:51.627477   20917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 04:29:51.631363   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:51.741374   20917 ssh_runner.go:195] Run: sudo systemctl restart containerd
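The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false), normalize the runtime to io.containerd.runc.v2, and point the CNI conf_dir at /etc/cni/net.d before this restart. A quick post-restart sanity check inside the guest (hypothetical) could be:

    grep -E 'SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expect: SystemdCgroup = false
    #         conf_dir = "/etc/cni/net.d"
    sudo systemctl is-active containerd   # expect: active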
	I0923 04:29:51.767028   20917 start.go:495] detecting cgroup driver to use...
	I0923 04:29:51.767111   20917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 04:29:51.772233   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 04:29:51.783309   20917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 04:29:51.791614   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 04:29:51.799872   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 04:29:51.805435   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 04:29:51.810831   20917 ssh_runner.go:195] Run: which cri-dockerd
	I0923 04:29:51.812020   20917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 04:29:51.814381   20917 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 04:29:51.819260   20917 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 04:29:51.932785   20917 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 04:29:52.032479   20917 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 04:29:52.032527   20917 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 04:29:52.038057   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:52.144181   20917 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 04:30:08.823024   20917 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.678902375s)
	I0923 04:30:08.823110   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 04:30:08.827622   20917 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 04:30:08.835881   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 04:30:08.840986   20917 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 04:30:08.920613   20917 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 04:30:09.003597   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:30:09.085181   20917 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 04:30:09.092005   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 04:30:09.096814   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:30:09.188143   20917 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 04:30:09.226789   20917 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 04:30:09.226885   20917 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 04:30:09.229336   20917 start.go:563] Will wait 60s for crictl version
	I0923 04:30:09.229404   20917 ssh_runner.go:195] Run: which crictl
	I0923 04:30:09.231200   20917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 04:30:09.243110   20917 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 04:30:09.243199   20917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 04:30:09.256415   20917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 04:30:09.273804   20917 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 04:30:09.273953   20917 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 04:30:09.275373   20917 kubeadm.go:883] updating cluster {Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53371 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 04:30:09.275415   20917 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 04:30:09.275478   20917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 04:30:09.285986   20917 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 04:30:09.285995   20917 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
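The mismatch here is the k8s.gcr.io → registry.k8s.io registry rename: the preloaded tarball ships images tagged under the old k8s.gcr.io prefix (listing above), while this minikube build looks for registry.k8s.io names, so the preload is treated as missing and the per-image cache-load path below kicks in. Since the two registries serve the same images, a manual workaround might be simple retagging (hypothetical, run against the guest's Docker):

    # Retag the preloaded control-plane images under the registry name minikube expects
    for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
      docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
    done
    docker tag k8s.gcr.io/etcd:3.5.3-0 registry.k8s.io/etcd:3.5.3-0
    docker tag k8s.gcr.io/coredns/coredns:v1.8.6 registry.k8s.io/coredns/coredns:v1.8.6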
	I0923 04:30:09.286048   20917 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 04:30:09.289330   20917 ssh_runner.go:195] Run: which lz4
	I0923 04:30:09.290818   20917 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 04:30:09.292267   20917 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 04:30:09.292278   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 04:30:10.236598   20917 docker.go:649] duration metric: took 945.832958ms to copy over tarball
	I0923 04:30:10.236680   20917 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 04:30:11.668711   20917 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.432023583s)
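The ~360 MB /preloaded.tar.lz4 transferred above is lz4-compressed Docker image state, which is why it is unpacked under /var with tar -I lz4; the 1.4 s extraction replaces pulling every image individually. Its contents can be listed without extracting (a sketch with the same tar/lz4 pairing the log uses):

    sudo tar -I lz4 -tf /preloaded.tar.lz4 | head
    # entries land under /var/lib/docker/... when extracted with -C /var as above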
	I0923 04:30:11.668728   20917 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 04:30:11.684707   20917 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 04:30:11.688238   20917 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 04:30:11.693379   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:30:11.781615   20917 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 04:30:13.013580   20917 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.231954833s)
	I0923 04:30:13.013680   20917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 04:30:13.029437   20917 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 04:30:13.029446   20917 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 04:30:13.029451   20917 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 04:30:13.033715   20917 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:30:13.035773   20917 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:30:13.038981   20917 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:30:13.039119   20917 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:30:13.040758   20917 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:30:13.041184   20917 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:30:13.043084   20917 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:30:13.042741   20917 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:30:13.044802   20917 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:30:13.044822   20917 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:30:13.046489   20917 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 04:30:13.046701   20917 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:30:13.049090   20917 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:30:13.049384   20917 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:30:13.051215   20917 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 04:30:13.051953   20917 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:30:13.470405   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:30:13.470976   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:30:13.471227   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:30:13.482271   20917 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 04:30:13.482297   20917 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:30:13.482365   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:30:13.489247   20917 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 04:30:13.489254   20917 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 04:30:13.489267   20917 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:30:13.489267   20917 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:30:13.489328   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:30:13.489349   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:30:13.499587   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 04:30:13.502880   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 04:30:13.507330   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 04:30:13.507332   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 04:30:13.508991   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:30:13.516252   20917 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 04:30:13.516275   20917 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:30:13.516340   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 04:30:13.525677   20917 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 04:30:13.525700   20917 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:30:13.525766   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:30:13.527982   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 04:30:13.528551   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 04:30:13.541470   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 04:30:13.541535   20917 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 04:30:13.541551   20917 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 04:30:13.541610   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 04:30:13.553224   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 04:30:13.553361   20917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 04:30:13.555373   20917 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 04:30:13.555385   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W0923 04:30:13.555609   20917 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 04:30:13.555735   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:30:13.562665   20917 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 04:30:13.562678   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0923 04:30:13.573764   20917 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 04:30:13.573789   20917 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:30:13.573855   20917 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:30:13.601419   20917 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 04:30:13.601452   20917 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 04:30:13.601575   20917 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 04:30:13.603032   20917 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 04:30:13.603049   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 04:30:13.645516   20917 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 04:30:13.645529   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 04:30:13.684722   20917 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0923 04:30:13.815583   20917 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 04:30:13.815713   20917 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:30:13.827424   20917 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 04:30:13.827450   20917 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:30:13.827519   20917 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:30:13.839206   20917 cache_images.go:92] duration metric: took 809.748208ms to LoadCachedImages
	W0923 04:30:13.839249   20917 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0923 04:30:13.839254   20917 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 04:30:13.839298   20917 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-903000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
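The rendered kubelet unit above uses the same ExecStart-reset idiom as the docker unit earlier: the bare ExecStart= clears the inherited command, and the replacement pins the kubelet to the cri-dockerd socket, the node IP, and the hostname override. Once the drop-in is written (scp of 10-kubeadm.conf below), the effective unit could be reviewed with (sketch):

    sudo systemctl cat kubelet
    # expect the drop-in's ExecStart ending in --node-ip=10.0.2.15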
	I0923 04:30:13.839374   20917 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 04:30:13.853528   20917 cni.go:84] Creating CNI manager for ""
	I0923 04:30:13.853539   20917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:30:13.853545   20917 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 04:30:13.853554   20917 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-903000 NodeName:running-upgrade-903000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 04:30:13.853618   20917 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-903000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 04:30:13.853681   20917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 04:30:13.857255   20917 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 04:30:13.857291   20917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 04:30:13.860391   20917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 04:30:13.865822   20917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 04:30:13.871314   20917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
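The 2096-byte kubeadm.yaml.new written here is the rendered config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Assuming the staged binaries, it could be validated offline with kubeadm's own dry-run (a sketch, not part of the test run):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run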
	I0923 04:30:13.877155   20917 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 04:30:13.878967   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:30:13.953861   20917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 04:30:13.959650   20917 certs.go:68] Setting up /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000 for IP: 10.0.2.15
	I0923 04:30:13.959658   20917 certs.go:194] generating shared ca certs ...
	I0923 04:30:13.959667   20917 certs.go:226] acquiring lock for ca certs: {Name:mkf84bedb9b35f23af77b237ccfe7d150b52a82b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:30:13.959820   20917 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.key
	I0923 04:30:13.959868   20917 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/proxy-client-ca.key
	I0923 04:30:13.959872   20917 certs.go:256] generating profile certs ...
	I0923 04:30:13.959959   20917 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/client.key
	I0923 04:30:13.959986   20917 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5
	I0923 04:30:13.959997   20917 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 04:30:14.090981   20917 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5 ...
	I0923 04:30:14.090998   20917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5: {Name:mk6a393e85a8361192900d9b7eb789e7f764a08c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:30:14.091453   20917 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5 ...
	I0923 04:30:14.091464   20917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5: {Name:mkfb1f56c2807f468675768569ecc64b62a47b1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:30:14.091601   20917 certs.go:381] copying /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.crt.23d199f5 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.crt
	I0923 04:30:14.091755   20917 certs.go:385] copying /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.key.23d199f5 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.key
	I0923 04:30:14.091919   20917 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/proxy-client.key
	I0923 04:30:14.092054   20917 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/18914.pem (1338 bytes)
	W0923 04:30:14.092089   20917 certs.go:480] ignoring /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/18914_empty.pem, impossibly tiny 0 bytes
	I0923 04:30:14.092095   20917 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 04:30:14.092121   20917 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem (1078 bytes)
	I0923 04:30:14.092147   20917 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem (1123 bytes)
	I0923 04:30:14.092172   20917 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/key.pem (1675 bytes)
	I0923 04:30:14.092227   20917 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem (1708 bytes)
	I0923 04:30:14.092601   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 04:30:14.100800   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 04:30:14.108694   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 04:30:14.116116   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 04:30:14.124088   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 04:30:14.130848   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 04:30:14.137504   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 04:30:14.145221   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 04:30:14.152771   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 04:30:14.159917   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/18914.pem --> /usr/share/ca-certificates/18914.pem (1338 bytes)
	I0923 04:30:14.166848   20917 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem --> /usr/share/ca-certificates/189142.pem (1708 bytes)
	I0923 04:30:14.173413   20917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 04:30:14.178442   20917 ssh_runner.go:195] Run: openssl version
	I0923 04:30:14.180116   20917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18914.pem && ln -fs /usr/share/ca-certificates/18914.pem /etc/ssl/certs/18914.pem"
	I0923 04:30:14.183057   20917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18914.pem
	I0923 04:30:14.184481   20917 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:17 /usr/share/ca-certificates/18914.pem
	I0923 04:30:14.184508   20917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18914.pem
	I0923 04:30:14.186511   20917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18914.pem /etc/ssl/certs/51391683.0"
	I0923 04:30:14.189418   20917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189142.pem && ln -fs /usr/share/ca-certificates/189142.pem /etc/ssl/certs/189142.pem"
	I0923 04:30:14.192614   20917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189142.pem
	I0923 04:30:14.194245   20917 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:17 /usr/share/ca-certificates/189142.pem
	I0923 04:30:14.194270   20917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189142.pem
	I0923 04:30:14.196123   20917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/189142.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 04:30:14.199320   20917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 04:30:14.202872   20917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 04:30:14.204532   20917 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I0923 04:30:14.204558   20917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 04:30:14.206340   20917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 04:30:14.209062   20917 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 04:30:14.210632   20917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 04:30:14.212674   20917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 04:30:14.214891   20917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 04:30:14.217036   20917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 04:30:14.219098   20917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 04:30:14.221080   20917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
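Each openssl run above passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 h); a zero exit lets minikube skip regeneration. Standalone, the same check reads (sketch):

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for at least 24h' || echo 'expires within 24h'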
	I0923 04:30:14.223146   20917 kubeadm.go:392] StartCluster: {Name:running-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53371 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 04:30:14.223223   20917 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 04:30:14.234578   20917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 04:30:14.238092   20917 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 04:30:14.238101   20917 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 04:30:14.238133   20917 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 04:30:14.241602   20917 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 04:30:14.241920   20917 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-903000" does not appear in /Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:30:14.242020   20917 kubeconfig.go:62] /Users/jenkins/minikube-integration/19690-18362/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-903000" cluster setting kubeconfig missing "running-upgrade-903000" context setting]
	I0923 04:30:14.242225   20917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/kubeconfig: {Name:mke35d42fdea9892a3eb00f2ea9c8fc1f44681bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:30:14.242655   20917 kapi.go:59] client config for running-upgrade-903000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/client.key", CAFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103a0a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 04:30:14.242995   20917 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 04:30:14.245915   20917 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-903000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
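	The drift check above works off diff's exit status: diff -u returns 0 when /var/tmp/minikube/kubeadm.yaml matches the newly rendered kubeadm.yaml.new, 1 when they differ, and a status of 1 is what triggers the "detected kubeadm config drift" reconfigure path. A rough, hypothetical Go sketch of that decision (not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err == nil {
		fmt.Println("no drift; keeping existing kubeadm.yaml")
		return
	}
	// diff exits 1 when the files differ and >1 on real errors.
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("detected kubeadm config drift, will reconfigure:\n%s", out)
		return
	}
	fmt.Println("diff failed:", err)
}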
	I0923 04:30:14.245921   20917 kubeadm.go:1160] stopping kube-system containers ...
	I0923 04:30:14.245971   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 04:30:14.257921   20917 docker.go:483] Stopping containers: [25f4509ca9dd be05ca549694 83303c6938f1 2b94a8e4df7f 7327067cf282 1915b352164d 3ba9dfb1baa7 d51a449aa2c5 dbadacc5b1c8 18a4502482f0 cf19ba1df3cc 23c22d92b606 08bd1213e480 20778b8fa89c a0c59653f70b 3d03a55f400e d431418bdb08 40e07760da99 68cfde6ed535 d064ae4d4cf0 f881a7b5b320 acc124b7eb11 5642cd7d8ea5 e5a7a84a4a79 5fd6bcf637db 4dd27927ee7b]
	I0923 04:30:14.258002   20917 ssh_runner.go:195] Run: docker stop 25f4509ca9dd be05ca549694 83303c6938f1 2b94a8e4df7f 7327067cf282 1915b352164d 3ba9dfb1baa7 d51a449aa2c5 dbadacc5b1c8 18a4502482f0 cf19ba1df3cc 23c22d92b606 08bd1213e480 20778b8fa89c a0c59653f70b 3d03a55f400e d431418bdb08 40e07760da99 68cfde6ed535 d064ae4d4cf0 f881a7b5b320 acc124b7eb11 5642cd7d8ea5 e5a7a84a4a79 5fd6bcf637db 4dd27927ee7b
	I0923 04:30:14.270397   20917 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 04:30:14.375497   20917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 04:30:14.378852   20917 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 23 11:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 23 11:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 23 11:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Sep 23 11:29 /etc/kubernetes/scheduler.conf
	
	I0923 04:30:14.378896   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/admin.conf
	I0923 04:30:14.381489   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 04:30:14.381519   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 04:30:14.384801   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/kubelet.conf
	I0923 04:30:14.388089   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 04:30:14.388115   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 04:30:14.390970   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/controller-manager.conf
	I0923 04:30:14.393598   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 04:30:14.393624   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 04:30:14.397035   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/scheduler.conf
	I0923 04:30:14.400584   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 04:30:14.400613   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
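	Each file under /etc/kubernetes is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:53371); grep exiting 1 means the endpoint is absent, so the stale kubeconfig is deleted and the kubeadm init phases below rewrite it. A hedged sketch of that loop in Go (structure is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:53371"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits 1 when no line matches; treat the file as stale.
		if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
			fmt.Println(f, "lacks the expected endpoint; removing")
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}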
	I0923 04:30:14.404111   20917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 04:30:14.406933   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:30:14.449762   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:30:14.808778   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:30:15.022268   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:30:15.049143   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:30:15.073896   20917 api_server.go:52] waiting for apiserver process to appear ...
	I0923 04:30:15.073985   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:15.576198   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:16.076078   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:16.576084   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:17.075019   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:17.576019   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:18.075687   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:18.576075   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:19.076084   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:19.576074   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:20.076132   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:20.576124   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:20.581207   20917 api_server.go:72] duration metric: took 5.50733775s to wait for apiserver process to appear ...
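	The half-second spacing of the pgrep timestamps above suggests a simple poll loop: run pgrep -xnf kube-apiserver.*minikube.* every ~500ms until it exits 0, then report the elapsed time as the duration metric. A minimal sketch under that assumption (the real wait also enforces an overall timeout, omitted here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Poll until pgrep finds a kube-apiserver process on the node.
	for exec.Command("sudo", "pgrep", "-xnf",
		"kube-apiserver.*minikube.*").Run() != nil {
		time.Sleep(500 * time.Millisecond) // matches the log cadence
	}
	fmt.Printf("duration metric: took %s to wait for apiserver process\n",
		time.Since(start))
}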
	I0923 04:30:20.581217   20917 api_server.go:88] waiting for apiserver healthz status ...
	I0923 04:30:20.581238   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:25.583299   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:25.583360   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:30.583915   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:30.583959   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:35.584496   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:35.584522   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:40.585183   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:40.585289   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:45.586784   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:45.586829   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:50.588160   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:50.588211   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:55.590361   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:55.590392   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:00.591658   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:00.591702   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:05.593996   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:05.594038   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:10.596423   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:10.596502   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:15.599117   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:15.599163   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:20.600205   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
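	Each healthz probe above gives up after roughly five seconds with "Client.Timeout exceeded while awaiting headers", i.e. an HTTP client with a ~5s deadline that never receives a response. A sketch of one probe under that assumption; InsecureSkipVerify is this sketch's shortcut, whereas the client config logged earlier pins the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between probes
		Transport: &http.Transport{
			// Shortcut for the sketch only; the real check verifies
			// the apiserver against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}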
	I0923 04:31:20.600497   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:20.620917   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:31:20.621029   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:20.635223   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:31:20.635362   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:20.647255   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:31:20.647350   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:20.658523   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:31:20.658605   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:20.670501   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:31:20.670567   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:20.682050   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:31:20.682128   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:20.692618   20917 logs.go:276] 0 containers: []
	W0923 04:31:20.692628   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:20.692693   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:20.703095   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:31:20.703114   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:31:20.703119   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:31:20.715586   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:31:20.715600   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:31:20.728536   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:31:20.728552   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:20.740703   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:20.740718   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:20.816333   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:31:20.816349   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:31:20.831072   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:31:20.831083   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:31:20.842821   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:31:20.842836   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:31:20.859568   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:31:20.859579   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:31:20.904047   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:31:20.904057   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:31:20.915754   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:31:20.915765   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:31:20.934721   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:31:20.934732   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:31:20.946626   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:31:20.946637   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:31:20.964880   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:31:20.964890   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:31:20.976505   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:31:20.976515   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:31:20.988209   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:20.988221   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:20.993096   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:31:20.993105   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:31:21.008216   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:21.008229   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:21.034386   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:21.034395   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:21.072491   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:31:21.072499   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
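	Every "Gathering logs for ..." pair above first resolves container ids with docker ps -a --filter=name=k8s_<component> and then tails the last 400 lines of each id found. A minimal sketch of both steps (the component name comes from the log; the code itself is illustrative, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: list container ids for a component, e.g. kube-apiserver.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_kube-apiserver", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	// Step 2: tail the last 400 log lines of each container found.
	for _, id := range strings.Fields(string(out)) {
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).
			CombinedOutput()
		fmt.Printf("== %s ==\n%s", id, logs)
	}
}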
	I0923 04:31:23.590202   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:28.592412   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:28.592582   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:28.607446   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:31:28.607556   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:28.619048   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:31:28.619139   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:28.631116   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:31:28.631191   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:28.641591   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:31:28.641680   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:28.653308   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:31:28.653391   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:28.667848   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:31:28.667933   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:28.677896   20917 logs.go:276] 0 containers: []
	W0923 04:31:28.677909   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:28.677981   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:28.688748   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:31:28.688763   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:28.688768   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:28.729236   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:31:28.729246   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:31:28.743604   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:31:28.743614   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:31:28.785643   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:31:28.785655   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:31:28.797102   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:31:28.797115   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:31:28.808997   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:28.809008   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:28.845366   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:31:28.845383   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:31:28.857160   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:31:28.857170   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:31:28.869349   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:31:28.869359   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:31:28.881593   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:28.881609   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:28.886190   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:31:28.886198   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:31:28.900353   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:31:28.900363   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:31:28.914509   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:31:28.914520   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:31:28.926973   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:31:28.926983   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:31:28.938703   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:28.938714   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:28.964438   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:31:28.964445   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:28.976570   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:31:28.976580   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:31:28.991028   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:31:28.991039   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:31:29.002416   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:31:29.002431   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:31:31.521431   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:36.523811   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:36.524033   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:36.542766   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:31:36.542874   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:36.557390   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:31:36.557481   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:36.571082   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:31:36.571168   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:36.581757   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:31:36.581851   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:36.592708   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:31:36.592794   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:36.603475   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:31:36.603557   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:36.614059   20917 logs.go:276] 0 containers: []
	W0923 04:31:36.614071   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:36.614139   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:36.624670   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:31:36.624688   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:31:36.624695   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:31:36.635548   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:36.635558   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:36.662364   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:31:36.662373   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:36.674317   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:36.674333   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:36.715035   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:31:36.715050   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:31:36.727482   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:31:36.727490   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:31:36.739226   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:31:36.739241   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:31:36.750193   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:31:36.750204   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:31:36.764247   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:31:36.764258   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:31:36.775334   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:36.775345   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:36.779646   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:36.779652   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:36.816912   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:31:36.816925   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:31:36.831225   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:31:36.831237   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:31:36.847931   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:31:36.847941   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:31:36.862001   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:31:36.862012   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:31:36.876406   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:31:36.876418   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:31:36.887506   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:31:36.887517   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:31:36.933148   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:31:36.933161   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:31:36.944875   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:31:36.944884   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:31:39.466692   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:44.467296   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:44.467553   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:44.495811   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:31:44.495966   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:44.513146   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:31:44.513247   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:44.529508   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:31:44.529593   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:44.541338   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:31:44.541418   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:44.551655   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:31:44.551746   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:44.571205   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:31:44.571297   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:44.582984   20917 logs.go:276] 0 containers: []
	W0923 04:31:44.582998   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:44.583068   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:44.593619   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:31:44.593636   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:31:44.593641   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:31:44.607874   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:31:44.607887   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:31:44.619292   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:31:44.619302   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:31:44.631190   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:31:44.631202   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:31:44.643556   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:31:44.643569   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:31:44.655694   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:44.655705   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:44.660067   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:31:44.660075   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:31:44.674945   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:31:44.674955   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:31:44.692578   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:31:44.692588   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:44.704905   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:44.704921   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:44.745605   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:31:44.745616   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:31:44.761411   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:31:44.761423   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:31:44.772905   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:31:44.772915   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:31:44.791173   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:44.791184   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:44.826548   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:31:44.826558   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:31:44.870990   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:31:44.871010   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:31:44.884883   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:31:44.884899   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:31:44.896194   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:31:44.896205   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:31:44.908070   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:44.908081   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:47.435306   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:52.437677   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:52.437994   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:52.475784   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:31:52.475970   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:52.495654   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:31:52.495769   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:52.510040   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:31:52.510133   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:52.522286   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:31:52.522375   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:52.533509   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:31:52.533593   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:52.544714   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:31:52.544798   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:52.555349   20917 logs.go:276] 0 containers: []
	W0923 04:31:52.555380   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:52.555459   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:52.566674   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:31:52.566690   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:31:52.566695   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:31:52.580813   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:31:52.580828   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:31:52.595595   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:31:52.595606   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:31:52.607994   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:31:52.608008   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:31:52.622100   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:31:52.622114   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:31:52.634337   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:31:52.634348   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:31:52.653069   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:52.653081   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:52.694270   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:31:52.694281   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:31:52.709104   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:31:52.709118   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:31:52.721425   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:31:52.721442   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:31:52.741043   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:31:52.741056   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:31:52.763754   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:52.763769   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:52.790612   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:31:52.790621   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:31:52.830331   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:31:52.830344   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:31:52.841873   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:31:52.841885   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:31:52.853583   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:31:52.853597   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:52.866109   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:52.866121   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:52.870798   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:52.870807   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:52.907186   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:31:52.907201   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:31:55.421409   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:00.423657   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:00.423874   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:00.444180   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:00.444300   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:00.468017   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:00.468108   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:00.479361   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:00.479442   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:00.490991   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:00.491075   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:00.501707   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:00.501817   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:00.512196   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:00.512269   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:00.522682   20917 logs.go:276] 0 containers: []
	W0923 04:32:00.522695   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:00.522770   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:00.533671   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:00.533689   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:00.533695   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:00.545374   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:00.545384   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:00.557560   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:00.557570   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:00.569207   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:00.569217   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:00.582637   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:00.582651   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:00.622622   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:00.622630   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:00.662832   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:00.662842   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:00.677988   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:00.678002   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:00.689385   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:00.689396   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:00.701399   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:00.701409   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:00.713272   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:00.713283   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:00.726210   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:00.726222   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:00.752168   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:00.752182   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:00.757067   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:00.757076   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:00.792497   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:00.792507   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:00.809892   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:00.809905   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:00.822071   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:00.822087   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:00.837249   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:00.837259   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:00.851090   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:00.851100   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:03.363811   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:08.366220   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:08.366665   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:08.396135   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:08.396305   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:08.414581   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:08.414700   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:08.428476   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:08.428571   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:08.440632   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:08.440712   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:08.450743   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:08.450830   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:08.461864   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:08.461954   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:08.472238   20917 logs.go:276] 0 containers: []
	W0923 04:32:08.472250   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:08.472327   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:08.483187   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:08.483202   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:08.483209   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:08.494361   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:08.494372   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:08.519571   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:08.519579   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:08.530889   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:08.530900   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:08.542614   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:08.542627   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:08.565008   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:08.565023   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:08.576308   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:08.576324   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:08.588402   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:08.588417   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:08.601901   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:08.601914   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:08.616104   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:08.616118   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:08.628042   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:08.628056   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:08.639972   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:08.639987   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:08.677670   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:08.677680   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:08.689299   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:08.689312   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:08.701685   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:08.701700   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:08.715395   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:08.715409   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:08.726308   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:08.726322   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:08.767289   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:08.767300   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:08.771752   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:08.771758   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:11.312900   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:16.315187   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:16.315403   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:16.329328   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:16.329425   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:16.339923   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:16.340010   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:16.352070   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:16.352159   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:16.362226   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:16.362315   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:16.372564   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:16.372641   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:16.382765   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:16.382835   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:16.393084   20917 logs.go:276] 0 containers: []
	W0923 04:32:16.393095   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:16.393168   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:16.403495   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:16.403510   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:16.403515   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:16.443517   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:16.443526   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:16.480734   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:16.480744   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:16.494768   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:16.494782   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:16.510597   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:16.510611   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:16.522619   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:16.522628   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:16.534246   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:16.534255   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:16.538971   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:16.538980   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:16.551606   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:16.551623   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:16.569012   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:16.569025   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:16.595325   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:16.595334   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:16.629392   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:16.629402   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:16.644234   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:16.644249   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:16.655727   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:16.655736   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:16.667779   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:16.667795   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:16.679650   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:16.679660   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:16.691507   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:16.691532   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:16.706032   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:16.706043   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:16.717954   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:16.717967   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
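
The lines above are one complete probe-and-gather cycle: a GET to https://10.0.2.15:8443/healthz that hits the 5-second client timeout, enumeration of the control-plane containers, then a tail of each container's logs. The same cycle repeats below, roughly every eight seconds, for the remainder of this section. As a reader's aid, a minimal Go sketch of the retry pattern follows; the URL and timeout are taken from the log lines, while the loop structure, back-off, and TLS handling are illustrative assumptions, not minikube's actual implementation.

// Minimal sketch of the health-poll pattern in this log: probe /healthz
// with a short client timeout, and on failure fall through to log
// gathering (elided here) before probing again.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
		Transport: &http.Transport{
			// assumption: skip verification of the cluster's self-signed cert
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // timed out: gather logs, then retry
			time.Sleep(2 * time.Second)      // illustrative back-off
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
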
	I0923 04:32:19.232147   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:24.234437   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:24.234572   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:24.246208   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:24.246289   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:24.256970   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:24.257055   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:24.267290   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:24.267380   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:24.278085   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:24.278164   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:24.288900   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:24.288985   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:24.299315   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:24.299399   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:24.310927   20917 logs.go:276] 0 containers: []
	W0923 04:32:24.310940   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:24.311008   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:24.321519   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:24.321535   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:24.321540   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:24.359580   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:24.359587   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:24.363776   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:24.363784   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:24.374950   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:24.374961   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:24.388615   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:24.388626   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:24.400476   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:24.400486   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:24.412678   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:24.412688   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:24.426235   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:24.426247   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:24.438060   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:24.438074   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:24.472527   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:24.472537   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:24.486676   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:24.486689   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:24.501431   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:24.501441   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:24.512579   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:24.512590   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:24.524156   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:24.524167   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:24.536278   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:24.536287   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:24.561739   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:24.561746   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:24.600800   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:24.600811   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:24.617410   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:24.617424   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:24.635670   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:24.635684   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
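
Each cycle begins by discovering candidate containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; throughout this section every component reports two containers (a previous and a current instance) and "kindnet" reports none. A hedched-down sketch of that discovery step follows, assuming only a local docker CLI on PATH; containerIDs and the component list are illustrative helpers, not minikube's code.

// List container IDs whose names match k8s_<component>, mirroring the
// "docker ps -a --filter" Run: lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
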
	I0923 04:32:27.149223   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:32.151842   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:32.152221   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:32.185921   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:32.186070   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:32.205501   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:32.205620   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:32.223974   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:32.224065   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:32.235930   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:32.236012   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:32.246888   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:32.246970   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:32.258306   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:32.258396   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:32.269220   20917 logs.go:276] 0 containers: []
	W0923 04:32:32.269232   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:32.269308   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:32.279790   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:32.279808   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:32.279813   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:32.291770   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:32.291781   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:32.303609   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:32.303620   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:32.341155   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:32.341166   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:32.354975   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:32.354985   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:32.366972   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:32.366983   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:32.379058   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:32.379068   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:32.383624   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:32.383631   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:32.394977   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:32.394989   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:32.406718   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:32.406730   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:32.418878   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:32.418888   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:32.430747   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:32.430759   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:32.442870   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:32.442883   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:32.482248   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:32.482256   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:32.519946   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:32.519956   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:32.533889   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:32.533899   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:32.548320   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:32.548329   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:32.560147   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:32.560156   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:32.581398   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:32.581409   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
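
The gathering pass itself is a fixed set of shell commands: docker logs --tail 400 <id> for each container, journalctl for the kubelet and docker/cri-docker units, dmesg for kernel warnings, kubectl describe nodes against the in-VM kubeconfig, and a container-status probe whose backquoted `which crictl || echo crictl` falls back to plain docker ps -a on hosts without crictl. A short sketch that replays a few of these commands verbatim; gather is a hypothetical helper, and the container ID is one of those listed above.

// Replay a handful of the gathering commands from the Run: lines.
// Command strings are copied verbatim from this log; running them
// requires the same in-VM environment (sudo, docker, journalctl).
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("kube-apiserver [cf19ba1df3cc]", "docker logs --tail 400 cf19ba1df3cc")
	// The backquotes are shell command substitution: use crictl if present,
	// else the bare name (which then fails and triggers the docker fallback).
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
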
	I0923 04:32:35.108886   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:40.109669   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:40.109843   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:40.121444   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:40.121538   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:40.132993   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:40.133082   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:40.143732   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:40.143811   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:40.171713   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:40.171801   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:40.183135   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:40.183220   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:40.193680   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:40.193761   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:40.203955   20917 logs.go:276] 0 containers: []
	W0923 04:32:40.203966   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:40.204033   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:40.214654   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:40.214673   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:40.214679   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:40.226341   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:40.226352   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:40.238122   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:40.238134   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:40.249753   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:40.249764   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:40.286571   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:40.286581   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:40.300978   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:40.300992   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:40.320156   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:40.320166   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:40.333435   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:40.333445   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:40.350603   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:40.350618   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:40.362616   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:40.362629   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:40.386162   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:40.386172   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:40.397953   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:40.397963   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:40.409007   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:40.409019   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:40.432993   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:40.433004   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:40.437382   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:40.437390   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:40.448842   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:40.448853   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:40.464689   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:40.464699   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:40.476872   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:40.476882   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:40.517185   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:40.517194   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:43.057108   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:48.059448   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:48.059678   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:48.088020   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:48.088153   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:48.104744   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:48.104838   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:48.117356   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:48.117443   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:48.130321   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:48.130406   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:48.140933   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:48.141016   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:48.152252   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:48.152336   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:48.164699   20917 logs.go:276] 0 containers: []
	W0923 04:32:48.164711   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:48.164774   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:48.175934   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:48.175951   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:48.175958   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:48.217031   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:48.217048   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:48.258061   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:48.258077   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:48.269441   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:48.269454   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:48.280889   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:48.280903   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:48.293231   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:48.293244   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:48.310851   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:48.310866   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:48.322692   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:48.322705   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:48.327150   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:48.327156   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:48.341670   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:48.341684   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:48.352968   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:48.352982   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:48.368217   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:48.368227   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:48.386109   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:48.386120   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:48.400268   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:48.400277   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:48.412153   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:48.412168   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:48.423973   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:48.423987   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:48.449047   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:48.449056   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:48.488675   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:48.488686   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:48.500135   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:48.500150   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:51.015315   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:56.016584   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:56.016883   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:56.039685   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:32:56.039850   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:56.054932   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:32:56.055016   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:56.067362   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:32:56.067446   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:56.078844   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:32:56.078926   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:56.089507   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:32:56.089583   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:56.114830   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:32:56.114902   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:56.124969   20917 logs.go:276] 0 containers: []
	W0923 04:32:56.124980   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:56.125047   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:56.136725   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:32:56.136743   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:32:56.136748   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:32:56.148204   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:56.148215   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:56.152646   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:32:56.152654   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:32:56.165601   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:32:56.165612   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:32:56.177821   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:32:56.177832   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:32:56.189941   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:56.189955   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:56.215190   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:32:56.215201   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:56.227065   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:56.227080   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:56.264833   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:32:56.264841   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:32:56.302322   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:32:56.302333   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:32:56.320034   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:32:56.320044   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:32:56.336984   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:32:56.336996   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:32:56.348323   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:32:56.348336   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:32:56.360790   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:32:56.360805   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:32:56.372172   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:32:56.372183   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:32:56.384477   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:56.384488   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:56.420458   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:32:56.420469   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:32:56.434150   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:32:56.434162   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:32:56.448988   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:32:56.448998   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:32:58.963000   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:03.965711   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:03.965911   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:03.979848   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:03.979943   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:03.991100   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:03.991187   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:04.002506   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:04.002592   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:04.016817   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:04.016903   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:04.026957   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:04.027037   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:04.037364   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:04.037443   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:04.047536   20917 logs.go:276] 0 containers: []
	W0923 04:33:04.047548   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:04.047610   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:04.058112   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:04.058125   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:04.058130   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:04.072092   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:04.072105   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:04.088015   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:04.088026   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:04.104843   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:04.104858   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:04.129883   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:04.129893   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:04.143108   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:04.143124   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:04.183401   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:04.183411   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:04.201863   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:04.201873   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:04.214569   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:04.214584   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:04.228097   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:04.228113   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:04.241863   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:04.241876   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:04.246332   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:04.246341   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:04.284143   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:04.284154   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:04.308542   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:04.308557   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:04.324415   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:04.324428   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:04.336234   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:04.336243   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:04.371115   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:04.371130   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:04.385003   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:04.385013   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:04.395745   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:04.395758   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:06.909278   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:11.911985   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:11.912413   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:11.945559   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:11.945713   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:11.965814   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:11.965923   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:11.980026   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:11.980126   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:11.992635   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:11.992718   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:12.003319   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:12.003394   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:12.017798   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:12.017886   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:12.028444   20917 logs.go:276] 0 containers: []
	W0923 04:33:12.028456   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:12.028532   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:12.039362   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:12.039377   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:12.039382   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:12.078672   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:12.078682   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:12.116724   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:12.116738   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:12.134633   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:12.134643   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:12.151030   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:12.151043   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:12.165630   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:12.165640   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:12.190861   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:12.190868   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:12.202665   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:12.202676   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:12.214430   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:12.214440   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:12.229853   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:12.229868   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:12.248687   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:12.248701   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:12.264142   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:12.264153   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:12.285520   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:12.285532   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:12.303341   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:12.303356   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:12.316508   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:12.316522   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:12.321108   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:12.321118   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:12.357203   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:12.357215   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:12.373202   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:12.373214   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:12.386624   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:12.386640   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:14.900953   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:19.903617   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:19.903857   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:19.923336   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:19.923442   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:19.936767   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:19.936858   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:19.951637   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:19.951719   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:19.962346   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:19.962423   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:19.973264   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:19.973350   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:19.984912   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:19.984996   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:19.994827   20917 logs.go:276] 0 containers: []
	W0923 04:33:19.994844   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:19.994921   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:20.006317   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:20.006335   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:20.006342   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:20.046589   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:20.046602   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:20.084961   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:20.084973   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:20.124157   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:20.124168   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:20.128390   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:20.128396   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:20.141958   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:20.141972   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:20.156571   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:20.156585   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:20.168637   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:20.168652   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:20.181223   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:20.181232   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:20.192786   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:20.192797   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:20.217202   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:20.217209   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:20.232534   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:20.232544   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:20.244076   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:20.244088   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:20.255809   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:20.255819   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:20.268643   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:20.268652   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:20.284074   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:20.284084   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:20.300221   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:20.300232   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:20.312171   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:20.312184   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:20.331262   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:20.331275   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:22.845401   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:27.847729   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:27.847932   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:27.863966   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:27.864069   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:27.881512   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:27.881597   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:27.892761   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:27.892848   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:27.903590   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:27.903670   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:27.914511   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:27.914598   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:27.925024   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:27.925104   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:27.935601   20917 logs.go:276] 0 containers: []
	W0923 04:33:27.935611   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:27.935689   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:27.946763   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:27.946782   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:27.946788   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:27.961756   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:27.961768   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:27.972998   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:27.973014   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:27.984934   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:27.984945   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:28.025327   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:28.025336   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:28.039276   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:28.039290   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:28.052889   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:28.052900   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:28.065797   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:28.065810   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:28.077323   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:28.077333   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:28.089068   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:28.089078   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:28.107056   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:28.107067   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:28.119693   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:28.119703   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:28.124248   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:28.124256   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:28.135120   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:28.135129   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:28.152236   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:28.152247   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:28.165295   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:28.165310   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:28.189083   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:28.189090   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:28.225559   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:28.225571   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:28.263643   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:28.263654   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:30.777986   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:35.780409   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
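
Each healthz probe above gives up after five seconds (04:33:30.777 to 04:33:35.780), matching the Client.Timeout in the error. A hand-run equivalent, assuming the same guest IP and an apiserver certificate the client does not trust (a reproduction sketch, not minikube's own code):

# Hypothetical manual probe of the same endpoint; -k skips TLS verification.
curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not responding"
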
	I0923 04:33:35.780564   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:35.796383   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:35.796486   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:35.808886   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:35.808971   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:35.819807   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:35.819892   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:35.830915   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:35.831000   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:35.841310   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:35.841392   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:35.851646   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:35.851726   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:35.862503   20917 logs.go:276] 0 containers: []
	W0923 04:33:35.862520   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:35.862589   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:35.873356   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
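
The container IDs fed into each log sweep are discovered with one docker ps per component, filtered on the k8s_<component> name prefix that kubelet gives its containers. Rolled into a loop for brevity (the loop form is ours; the filter and format flags are verbatim):

for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet storage-provisioner; do
    docker ps -a --filter=name=k8s_$c --format={{.ID}}
done
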
	I0923 04:33:35.873372   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:35.873378   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:35.877971   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:35.877977   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:35.889469   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:35.889482   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:35.901015   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:35.901027   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:35.912337   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:35.912347   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:35.924781   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:35.924792   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:35.938626   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:35.938637   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:35.977744   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:35.977760   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:35.992144   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:35.992154   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:36.004377   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:36.004389   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:36.016142   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:36.016153   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:36.053785   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:36.053795   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:36.088559   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:36.088575   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:36.102426   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:36.102435   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:36.114181   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:36.114190   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:36.131007   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:36.131020   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:36.155461   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:36.155471   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:36.179643   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:36.179653   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:36.190905   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:36.190918   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:38.704780   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:43.707118   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:43.707296   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:43.719547   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:43.719640   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:43.730349   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:43.730476   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:43.749203   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:43.749285   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:43.759711   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:43.759796   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:43.770667   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:43.770749   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:43.781451   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:43.781534   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:43.793187   20917 logs.go:276] 0 containers: []
	W0923 04:33:43.793197   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:43.793270   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:43.803691   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:43.803704   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:43.803710   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:43.815263   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:43.815272   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:43.827018   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:43.827033   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:43.839033   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:43.839041   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:43.857217   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:43.857230   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:43.869388   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:43.869402   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:43.889638   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:43.889651   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:43.901637   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:43.901647   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:43.913740   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:43.913751   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:43.924870   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:43.924884   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:43.936660   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:43.936670   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:43.956848   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:43.956856   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:43.961613   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:43.961619   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:44.005815   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:44.005827   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:44.043564   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:44.043575   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:44.058837   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:44.058850   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:44.070120   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:44.070131   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:44.082329   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:44.082341   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:44.107516   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:44.107527   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:46.649446   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:51.652022   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:51.652269   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:51.672440   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:51.672557   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:51.688200   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:51.688284   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:51.700519   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:51.700613   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:51.711811   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:51.711898   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:51.722702   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:51.722783   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:51.733690   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:51.733777   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:51.744818   20917 logs.go:276] 0 containers: []
	W0923 04:33:51.744831   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:51.744907   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:51.756289   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:51.756304   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:51.756312   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:51.795657   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:51.795664   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:51.814368   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:51.814379   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:51.828130   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:51.828141   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:51.840598   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:51.840608   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:51.852481   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:51.852492   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:51.870896   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:51.870905   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:51.882331   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:51.882341   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:51.907094   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:51.907105   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:51.918876   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:51.918890   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:51.923611   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:51.923617   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:51.935340   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:51.935350   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:51.971948   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:33:51.971957   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:33:51.996198   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:51.996215   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:52.008130   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:52.008140   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:52.019979   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:52.019989   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:52.063070   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:52.063082   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:33:52.077079   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:52.077093   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:52.088559   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:52.088575   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:54.602081   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:59.604363   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:59.604487   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:59.616501   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:33:59.616581   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:59.628452   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:33:59.628539   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:59.640406   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:33:59.640491   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:59.656900   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:33:59.656976   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:59.667313   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:33:59.667401   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:59.682847   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:33:59.682930   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:59.693152   20917 logs.go:276] 0 containers: []
	W0923 04:33:59.693165   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:59.693242   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:59.704039   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:33:59.704056   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:59.704062   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:59.741365   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:33:59.741381   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:33:59.752892   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:33:59.752902   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:33:59.764796   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:33:59.764808   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:59.776846   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:33:59.776856   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:33:59.791914   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:33:59.791925   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:33:59.803967   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:33:59.803982   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:33:59.815553   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:33:59.815564   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:33:59.827159   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:33:59.827174   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:33:59.849196   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:33:59.849206   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:33:59.861081   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:33:59.861092   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:33:59.878246   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:59.878255   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:59.916892   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:59.916903   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:59.921881   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:33:59.921890   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:33:59.960834   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:33:59.960846   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:33:59.973884   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:33:59.973894   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:33:59.985769   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:33:59.985781   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:34:00.000042   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:34:00.000054   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:34:00.014095   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:34:00.014113   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:34:02.540677   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:07.543313   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:07.543449   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:34:07.557613   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:34:07.557696   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:34:07.568857   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:34:07.568942   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:34:07.580258   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:34:07.580336   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:34:07.591007   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:34:07.591094   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:34:07.602204   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:34:07.602297   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:34:07.618165   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:34:07.618250   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:34:07.628619   20917 logs.go:276] 0 containers: []
	W0923 04:34:07.628629   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:34:07.628702   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:34:07.639554   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:34:07.639570   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:34:07.639575   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:34:07.659571   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:34:07.659586   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:34:07.684891   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:34:07.684918   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:34:07.725062   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:34:07.725086   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:34:07.740869   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:34:07.740883   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:34:07.754398   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:34:07.754410   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:34:07.770916   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:34:07.770927   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:34:07.783351   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:34:07.783363   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:34:07.820730   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:34:07.820742   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:34:07.833341   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:34:07.833357   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:34:07.848609   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:34:07.848621   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:34:07.860552   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:34:07.860567   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:34:07.864933   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:34:07.864938   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:34:07.881174   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:34:07.881189   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:34:07.898413   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:34:07.898429   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:34:07.915029   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:34:07.915043   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:34:07.930466   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:34:07.930481   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:34:07.969792   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:34:07.969804   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:34:07.982459   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:34:07.982472   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:34:10.497338   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:15.499629   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:15.499913   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:34:15.518660   20917 logs.go:276] 2 containers: [93e4d9301086 cf19ba1df3cc]
	I0923 04:34:15.518774   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:34:15.532626   20917 logs.go:276] 2 containers: [a8024eb2d8d3 7327067cf282]
	I0923 04:34:15.532710   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:34:15.544871   20917 logs.go:276] 2 containers: [b6c7f17b7630 a0c59653f70b]
	I0923 04:34:15.544967   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:34:15.555225   20917 logs.go:276] 2 containers: [68ba318290a3 3ba9dfb1baa7]
	I0923 04:34:15.555295   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:34:15.565565   20917 logs.go:276] 2 containers: [b6032b1a7d80 23c22d92b606]
	I0923 04:34:15.565650   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:34:15.575977   20917 logs.go:276] 2 containers: [058b9aa76c09 83303c6938f1]
	I0923 04:34:15.576051   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:34:15.586505   20917 logs.go:276] 0 containers: []
	W0923 04:34:15.586518   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:34:15.586594   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:34:15.597404   20917 logs.go:276] 2 containers: [65f86845d38b d51a449aa2c5]
	I0923 04:34:15.597419   20917 logs.go:123] Gathering logs for kube-proxy [23c22d92b606] ...
	I0923 04:34:15.597424   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23c22d92b606"
	I0923 04:34:15.609429   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:34:15.609443   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:34:15.623584   20917 logs.go:123] Gathering logs for kube-proxy [b6032b1a7d80] ...
	I0923 04:34:15.623596   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6032b1a7d80"
	I0923 04:34:15.635733   20917 logs.go:123] Gathering logs for etcd [a8024eb2d8d3] ...
	I0923 04:34:15.635747   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8024eb2d8d3"
	I0923 04:34:15.650071   20917 logs.go:123] Gathering logs for coredns [b6c7f17b7630] ...
	I0923 04:34:15.650094   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6c7f17b7630"
	I0923 04:34:15.661805   20917 logs.go:123] Gathering logs for kube-scheduler [68ba318290a3] ...
	I0923 04:34:15.661819   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 68ba318290a3"
	I0923 04:34:15.673550   20917 logs.go:123] Gathering logs for storage-provisioner [65f86845d38b] ...
	I0923 04:34:15.673565   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65f86845d38b"
	I0923 04:34:15.685064   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:34:15.685078   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:34:15.707191   20917 logs.go:123] Gathering logs for kube-apiserver [93e4d9301086] ...
	I0923 04:34:15.707201   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e4d9301086"
	I0923 04:34:15.721497   20917 logs.go:123] Gathering logs for etcd [7327067cf282] ...
	I0923 04:34:15.721508   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7327067cf282"
	I0923 04:34:15.736023   20917 logs.go:123] Gathering logs for coredns [a0c59653f70b] ...
	I0923 04:34:15.736033   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0c59653f70b"
	I0923 04:34:15.748259   20917 logs.go:123] Gathering logs for kube-scheduler [3ba9dfb1baa7] ...
	I0923 04:34:15.748269   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ba9dfb1baa7"
	I0923 04:34:15.759976   20917 logs.go:123] Gathering logs for kube-controller-manager [058b9aa76c09] ...
	I0923 04:34:15.759986   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 058b9aa76c09"
	I0923 04:34:15.781552   20917 logs.go:123] Gathering logs for storage-provisioner [d51a449aa2c5] ...
	I0923 04:34:15.781565   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d51a449aa2c5"
	I0923 04:34:15.797633   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:34:15.797645   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:34:15.837041   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:34:15.837053   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:34:15.841919   20917 logs.go:123] Gathering logs for kube-apiserver [cf19ba1df3cc] ...
	I0923 04:34:15.841925   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf19ba1df3cc"
	I0923 04:34:15.879629   20917 logs.go:123] Gathering logs for kube-controller-manager [83303c6938f1] ...
	I0923 04:34:15.879641   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83303c6938f1"
	I0923 04:34:15.891339   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:34:15.891349   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:34:18.433754   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:23.436088   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:23.436155   20917 kubeadm.go:597] duration metric: took 4m9.199180208s to restartPrimaryControlPlane
	W0923 04:34:23.436216   20917 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 04:34:23.436246   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 04:34:24.613656   20917 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.177404125s)
	I0923 04:34:24.613741   20917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 04:34:24.618817   20917 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 04:34:24.621757   20917 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 04:34:24.624856   20917 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 04:34:24.624867   20917 kubeadm.go:157] found existing configuration files:
	
	I0923 04:34:24.624916   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/admin.conf
	I0923 04:34:24.627924   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 04:34:24.627987   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 04:34:24.630859   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/kubelet.conf
	I0923 04:34:24.633561   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 04:34:24.633587   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 04:34:24.635957   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/controller-manager.conf
	I0923 04:34:24.638659   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 04:34:24.638697   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 04:34:24.641600   20917 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/scheduler.conf
	I0923 04:34:24.644367   20917 kubeadm.go:163] "https://control-plane.minikube.internal:53371" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53371 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 04:34:24.644401   20917 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
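
The four grep/rm pairs above implement one rule: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A sketch of the same logic (in this run all four files are absent, so every grep exits 2 and every rm is a no-op):

ENDPOINT="https://control-plane.minikube.internal:53371"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep "$ENDPOINT" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
done
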
	I0923 04:34:24.646922   20917 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 04:34:24.665380   20917 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 04:34:24.665465   20917 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 04:34:24.716167   20917 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 04:34:24.716330   20917 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 04:34:24.716394   20917 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0923 04:34:24.771121   20917 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 04:34:24.775322   20917 out.go:235]   - Generating certificates and keys ...
	I0923 04:34:24.775430   20917 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 04:34:24.775538   20917 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 04:34:24.775633   20917 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 04:34:24.775715   20917 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 04:34:24.775814   20917 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 04:34:24.775901   20917 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 04:34:24.775944   20917 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 04:34:24.775978   20917 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 04:34:24.776016   20917 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 04:34:24.776078   20917 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 04:34:24.776101   20917 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 04:34:24.776148   20917 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 04:34:24.901332   20917 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 04:34:25.174618   20917 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 04:34:25.299186   20917 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 04:34:25.350523   20917 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 04:34:25.380646   20917 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 04:34:25.381010   20917 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 04:34:25.381031   20917 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 04:34:25.471890   20917 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 04:34:25.476145   20917 out.go:235]   - Booting up control plane ...
	I0923 04:34:25.476196   20917 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 04:34:25.476234   20917 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 04:34:25.476269   20917 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 04:34:25.476311   20917 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 04:34:25.476404   20917 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 04:34:30.476936   20917 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001783 seconds
	I0923 04:34:30.477005   20917 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 04:34:30.480786   20917 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 04:34:30.996759   20917 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 04:34:30.997145   20917 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-903000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 04:34:31.502475   20917 kubeadm.go:310] [bootstrap-token] Using token: wp1uhz.1z6k2b503pwtd74f
	I0923 04:34:31.509173   20917 out.go:235]   - Configuring RBAC rules ...
	I0923 04:34:31.509257   20917 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 04:34:31.509316   20917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 04:34:31.513762   20917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 04:34:31.514891   20917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 04:34:31.515984   20917 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 04:34:31.520499   20917 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 04:34:31.524625   20917 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 04:34:31.707150   20917 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 04:34:31.907487   20917 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 04:34:31.907897   20917 kubeadm.go:310] 
	I0923 04:34:31.907926   20917 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 04:34:31.907930   20917 kubeadm.go:310] 
	I0923 04:34:31.907964   20917 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 04:34:31.907968   20917 kubeadm.go:310] 
	I0923 04:34:31.907979   20917 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 04:34:31.908030   20917 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 04:34:31.908060   20917 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 04:34:31.908065   20917 kubeadm.go:310] 
	I0923 04:34:31.908093   20917 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 04:34:31.908097   20917 kubeadm.go:310] 
	I0923 04:34:31.908122   20917 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 04:34:31.908127   20917 kubeadm.go:310] 
	I0923 04:34:31.908153   20917 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 04:34:31.908195   20917 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 04:34:31.908240   20917 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 04:34:31.908243   20917 kubeadm.go:310] 
	I0923 04:34:31.908285   20917 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 04:34:31.908331   20917 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 04:34:31.908336   20917 kubeadm.go:310] 
	I0923 04:34:31.908384   20917 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wp1uhz.1z6k2b503pwtd74f \
	I0923 04:34:31.908442   20917 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a5393725c1ebf724a26137eacec694c8d322652550455bc31dd6da673086408b \
	I0923 04:34:31.908460   20917 kubeadm.go:310] 	--control-plane 
	I0923 04:34:31.908465   20917 kubeadm.go:310] 
	I0923 04:34:31.908515   20917 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 04:34:31.908520   20917 kubeadm.go:310] 
	I0923 04:34:31.908567   20917 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wp1uhz.1z6k2b503pwtd74f \
	I0923 04:34:31.908627   20917 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a5393725c1ebf724a26137eacec694c8d322652550455bc31dd6da673086408b 
	I0923 04:34:31.908691   20917 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
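
For readability, the kubeadm init invocation from the Start: line above, reflowed; the flags and values are verbatim from the log, and only the $IGNORE variable is ours. The ignore list waives preflight checks for directories and manifests that legitimately already exist on a node being reinitialized.

IGNORE=DirAvailable--etc-kubernetes-manifests
IGNORE=$IGNORE,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd
IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
IGNORE=$IGNORE,FileAvailable--etc-kubernetes-manifests-etcd.yaml
IGNORE=$IGNORE,Port-10250,Swap,NumCPU,Mem
sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=$IGNORE
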
	I0923 04:34:31.908722   20917 cni.go:84] Creating CNI manager for ""
	I0923 04:34:31.908731   20917 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:34:31.913292   20917 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 04:34:31.921195   20917 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 04:34:31.924470   20917 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
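
The 496-byte conflist itself is not reproduced in the log. For orientation only, a minimal bridge CNI config of the general shape such a file takes; the names, subnet, and plugin options below are illustrative assumptions, not the verbatim 1-k8s.conflist:

# Illustrative only: NOT the verbatim file written by minikube in this run.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
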
	I0923 04:34:31.929310   20917 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 04:34:31.929357   20917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 04:34:31.929373   20917 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-903000 minikube.k8s.io/updated_at=2024_09_23T04_34_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=running-upgrade-903000 minikube.k8s.io/primary=true
	I0923 04:34:31.972416   20917 ops.go:34] apiserver oom_adj: -16
	I0923 04:34:31.972414   20917 kubeadm.go:1113] duration metric: took 43.097334ms to wait for elevateKubeSystemPrivileges
	I0923 04:34:31.972434   20917 kubeadm.go:394] duration metric: took 4m17.750461709s to StartCluster
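
The two kubectl Run: lines at 04:34:31.929 reflowed into readable form; the $KUBECTL and $KCFG shorthands are ours, everything else is verbatim:

KUBECTL=/var/lib/minikube/binaries/v1.24.1/kubectl
KCFG=/var/lib/minikube/kubeconfig
# Grant kube-system:default cluster-admin so minikube-managed components can run.
sudo $KUBECTL create clusterrolebinding minikube-rbac \
    --clusterrole=cluster-admin --serviceaccount=kube-system:default \
    --kubeconfig=$KCFG
# Stamp the node with minikube identity labels.
sudo $KUBECTL --kubeconfig=$KCFG label --overwrite nodes running-upgrade-903000 \
    minikube.k8s.io/updated_at=2024_09_23T04_34_31_0700 \
    minikube.k8s.io/version=v1.34.0 \
    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 \
    minikube.k8s.io/name=running-upgrade-903000 \
    minikube.k8s.io/primary=true
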
	I0923 04:34:31.972445   20917 settings.go:142] acquiring lock: {Name:mkf31abe3bf81ad5b4da1674523af9683936735a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:34:31.972527   20917 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:34:31.972915   20917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/kubeconfig: {Name:mke35d42fdea9892a3eb00f2ea9c8fc1f44681bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:34:31.973117   20917 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:34:31.973121   20917 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 04:34:31.973154   20917 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-903000"
	I0923 04:34:31.973161   20917 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-903000"
	W0923 04:34:31.973165   20917 addons.go:243] addon storage-provisioner should already be in state true
	I0923 04:34:31.973176   20917 host.go:66] Checking if "running-upgrade-903000" exists ...
	I0923 04:34:31.973202   20917 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:34:31.973225   20917 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-903000"
	I0923 04:34:31.973231   20917 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-903000"
	I0923 04:34:31.974051   20917 kapi.go:59] client config for running-upgrade-903000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/running-upgrade-903000/client.key", CAFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103a0a030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 04:34:31.974173   20917 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-903000"
	W0923 04:34:31.974178   20917 addons.go:243] addon default-storageclass should already be in state true
	I0923 04:34:31.974184   20917 host.go:66] Checking if "running-upgrade-903000" exists ...
	I0923 04:34:31.977279   20917 out.go:177] * Verifying Kubernetes components...
	I0923 04:34:31.977572   20917 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 04:34:31.981328   20917 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 04:34:31.981334   20917 sshutil.go:53] new ssh client: &{IP:localhost Port:53278 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 04:34:31.985255   20917 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:34:31.988167   20917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:34:31.991217   20917 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 04:34:31.991224   20917 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 04:34:31.991229   20917 sshutil.go:53] new ssh client: &{IP:localhost Port:53278 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/running-upgrade-903000/id_rsa Username:docker}
	I0923 04:34:32.069235   20917 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 04:34:32.076538   20917 api_server.go:52] waiting for apiserver process to appear ...
	I0923 04:34:32.076610   20917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:34:32.080565   20917 api_server.go:72] duration metric: took 107.438709ms to wait for apiserver process to appear ...
	I0923 04:34:32.080572   20917 api_server.go:88] waiting for apiserver healthz status ...
	I0923 04:34:32.080583   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:32.116238   20917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 04:34:32.129309   20917 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
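
Addon enablement bottoms out in two ordinary applies inside the guest, using the cluster's own kubectl binary and kubeconfig, exactly as in the two Run: lines above:

sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.24.1/kubectl apply \
    -f /etc/kubernetes/addons/storageclass.yaml
sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.24.1/kubectl apply \
    -f /etc/kubernetes/addons/storage-provisioner.yaml
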
	I0923 04:34:32.472900   20917 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 04:34:32.472911   20917 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 04:34:37.082626   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:37.082650   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:42.082830   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:42.082864   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:47.083224   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:47.083260   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:52.084112   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:52.084164   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:57.084848   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:57.084877   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:02.085308   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:02.085355   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 04:35:02.475025   20917 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 04:35:02.480241   20917 out.go:177] * Enabled addons: storage-provisioner
	I0923 04:35:02.491128   20917 addons.go:510] duration metric: took 30.518144917s for enable addons: enabled=[storage-provisioner]
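The api_server.go:253/269 pairs that dominate the rest of this log are minikube polling the apiserver's /healthz endpoint, with each attempt timing out after roughly five seconds. A minimal Go sketch of such a poll loop is below, for orientation only: the URL comes from the log, the 5-second timeout is inferred from the gap between each "Checking" and "stopped" pair, and TLS verification is skipped here rather than wiring up the client certificates shown in the rest.Config dump above.

```go
// healthz_poll.go - illustrative sketch of the /healthz poll loop visible in
// this log. Not minikube's actual api_server.go: the 5s timeout is inferred
// from the log cadence, and TLS verification is skipped (assumption) to keep
// the sketch self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s cadence seen in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz" // endpoint from the log above
	for {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // the real loop interleaves diagnostics before retrying
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
}
```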
	I0923 04:35:07.085858   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:07.085903   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:12.086679   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:12.086701   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:17.087954   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:17.087994   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:22.090084   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:22.090110   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:27.092246   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:27.092300   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:32.094505   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:32.094633   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:32.117548   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:35:32.117644   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:32.130074   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:35:32.130159   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:32.140110   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:35:32.140410   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:32.152602   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:35:32.152685   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:32.163011   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:35:32.163091   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:32.173824   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:35:32.173914   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:32.184634   20917 logs.go:276] 0 containers: []
	W0923 04:35:32.184644   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:32.184713   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:32.195481   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:35:32.195499   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:32.195506   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:32.232106   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:35:32.232117   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:35:32.247833   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:35:32.247846   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:35:32.261984   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:35:32.261993   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:35:32.273681   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:35:32.273690   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:35:32.286123   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:35:32.286132   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:35:32.301176   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:32.301190   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:32.335560   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:32.335568   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:32.340321   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:35:32.340328   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:32.351491   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:35:32.351502   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:35:32.370077   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:32.370094   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:32.393674   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:35:32.393682   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:35:32.405201   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:35:32.405216   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
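Each diagnostic cycle above (and the near-identical cycles that follow) has the same two-phase shape: first resolve container IDs with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each matched container with docker logs --tail 400 <id>, plus journalctl for the kubelet and Docker units. A minimal sketch of that pattern follows, assuming a local docker CLI on PATH; in the test itself these commands run over SSH inside the VM, and this is not minikube's logs.go implementation.

```go
// gather_logs.go - illustrative sketch of the container-discovery /
// log-gathering cycle in this log. Assumption: a local docker CLI on PATH;
// the real test runs these commands over SSH inside the minikube VM.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists containers whose name matches k8s_<component>,
// mirroring: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			// Mirrors: docker logs --tail 400 <id>
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(out))
		}
	}
}
```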
	I0923 04:35:34.918734   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:39.920980   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:39.921147   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:39.936300   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:35:39.936396   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:39.948687   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:35:39.948780   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:39.959521   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:35:39.959600   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:39.969898   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:35:39.969979   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:39.980650   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:35:39.980736   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:39.991025   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:35:39.991114   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:40.001153   20917 logs.go:276] 0 containers: []
	W0923 04:35:40.001164   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:40.001227   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:40.011615   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:35:40.011632   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:35:40.011637   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:35:40.029569   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:35:40.029580   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:35:40.041112   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:40.041123   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:40.064869   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:40.064878   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:40.069273   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:40.069280   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:40.104058   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:35:40.104070   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:35:40.119773   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:35:40.119788   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:35:40.132356   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:35:40.132366   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:35:40.147175   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:35:40.147184   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:40.159391   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:40.159401   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:40.194136   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:35:40.194149   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:35:40.209831   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:35:40.209843   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:35:40.221529   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:35:40.221545   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:35:42.735269   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:47.737463   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:47.737659   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:47.756515   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:35:47.756620   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:47.770050   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:35:47.770162   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:47.783626   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:35:47.783707   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:47.793598   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:35:47.793683   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:47.804143   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:35:47.804224   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:47.814807   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:35:47.814887   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:47.824845   20917 logs.go:276] 0 containers: []
	W0923 04:35:47.824858   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:47.824928   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:47.835175   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:35:47.835192   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:35:47.835198   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:47.846814   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:47.846824   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:47.886963   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:35:47.886978   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:35:47.901713   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:35:47.901725   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:35:47.916608   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:35:47.916617   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:35:47.932772   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:35:47.932783   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:35:47.944690   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:35:47.944701   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:35:47.961941   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:47.961950   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:47.984979   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:47.984986   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:48.018828   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:48.018837   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:48.023474   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:35:48.023480   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:35:48.043039   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:35:48.043054   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:35:48.055147   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:35:48.055162   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:35:50.568467   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:55.569008   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:55.569172   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:55.595374   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:35:55.595457   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:55.605751   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:35:55.605836   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:55.617226   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:35:55.617310   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:55.627801   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:35:55.627889   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:55.639136   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:35:55.639224   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:55.649975   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:35:55.650056   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:55.660948   20917 logs.go:276] 0 containers: []
	W0923 04:35:55.660960   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:55.661034   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:55.670808   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:35:55.670823   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:35:55.670828   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:35:55.686436   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:35:55.686446   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:35:55.698609   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:35:55.698622   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:35:55.716627   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:55.716640   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:55.741793   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:35:55.741812   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:55.754797   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:55.754807   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:55.794833   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:35:55.794844   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:35:55.809634   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:35:55.809645   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:35:55.824563   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:35:55.824576   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:35:55.840849   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:35:55.840861   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:35:55.855795   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:35:55.855811   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:35:55.868263   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:55.868274   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:55.902420   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:55.902427   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:58.409174   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:03.411374   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:03.411532   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:03.429775   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:03.429868   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:03.440308   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:03.440398   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:03.451828   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:03.451921   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:03.462371   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:03.462455   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:03.474268   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:03.474342   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:03.484692   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:03.484779   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:03.494935   20917 logs.go:276] 0 containers: []
	W0923 04:36:03.494949   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:03.495019   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:03.505526   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:03.505540   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:03.505546   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:03.540276   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:03.540287   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:03.544827   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:03.544834   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:03.579387   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:03.579398   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:03.594151   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:03.594163   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:03.608606   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:03.608616   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:03.629627   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:03.629639   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:03.641913   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:03.641923   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:03.653337   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:03.653349   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:03.677368   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:03.677378   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:03.690380   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:03.690391   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:03.705233   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:03.705243   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:03.724488   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:03.724499   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:06.237782   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:11.240026   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:11.240165   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:11.250786   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:11.250872   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:11.261500   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:11.261583   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:11.273161   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:11.273245   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:11.283575   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:11.283654   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:11.294334   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:11.294410   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:11.308310   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:11.308388   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:11.319178   20917 logs.go:276] 0 containers: []
	W0923 04:36:11.319194   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:11.319258   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:11.330801   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:11.330819   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:11.330824   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:11.347844   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:11.347858   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:11.361504   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:11.361518   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:11.377368   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:11.377379   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:11.394418   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:11.394431   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:11.405567   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:11.405577   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:11.418487   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:11.418497   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:11.423545   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:11.423553   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:11.458099   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:11.458113   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:11.470077   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:11.470088   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:11.484842   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:11.484856   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:11.507530   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:11.507543   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:11.531635   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:11.531652   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:14.068810   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:19.069707   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:19.069876   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:19.083293   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:19.083386   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:19.094390   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:19.094462   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:19.104615   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:19.104697   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:19.115223   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:19.115310   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:19.126044   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:19.126115   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:19.137427   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:19.137505   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:19.148071   20917 logs.go:276] 0 containers: []
	W0923 04:36:19.148082   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:19.148152   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:19.158944   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:19.158964   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:19.158969   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:19.171092   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:19.171103   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:19.185853   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:19.185866   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:19.208368   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:19.208381   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:19.220347   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:19.220358   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:19.244926   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:19.244934   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:19.279815   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:19.279824   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:19.316894   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:19.316910   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:19.335748   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:19.335758   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:19.347870   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:19.347883   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:19.360821   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:19.360832   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:19.365173   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:19.365182   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:19.379743   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:19.379753   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:21.893538   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:26.895760   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:26.896030   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:26.913995   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:26.914102   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:26.928864   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:26.928958   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:26.939960   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:26.940047   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:26.950496   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:26.950570   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:26.960530   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:26.960613   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:26.970521   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:26.970594   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:26.980891   20917 logs.go:276] 0 containers: []
	W0923 04:36:26.980902   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:26.980975   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:26.991468   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:26.991484   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:26.991491   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:27.027896   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:27.027904   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:27.032736   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:27.032747   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:27.066761   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:27.066774   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:27.081433   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:27.081446   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:27.093088   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:27.093102   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:27.108090   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:27.108104   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:27.119967   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:27.119977   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:27.144174   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:27.144181   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:27.158141   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:27.158154   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:27.169424   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:27.169437   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:27.187512   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:27.187525   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:27.199492   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:27.199503   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:29.713197   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:34.715365   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:34.715548   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:34.733537   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:34.733636   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:34.744507   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:34.744584   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:34.754682   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:34.754767   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:34.764569   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:34.764645   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:34.775309   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:34.775395   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:34.786352   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:34.786430   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:34.796359   20917 logs.go:276] 0 containers: []
	W0923 04:36:34.796371   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:34.796446   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:34.806961   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:34.806981   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:34.806986   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:34.822241   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:34.822258   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:34.847302   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:34.847309   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:34.866054   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:34.866066   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:34.880312   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:34.880324   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:34.892263   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:34.892280   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:34.904172   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:34.904183   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:34.917098   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:34.917114   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:34.935084   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:34.935098   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:34.947193   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:34.947202   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:34.958821   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:34.958831   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:34.994445   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:34.994453   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:34.999294   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:34.999304   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:37.534674   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:42.536979   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:42.537245   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:42.559170   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:42.559292   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:42.574443   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:42.574546   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:42.586749   20917 logs.go:276] 2 containers: [93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:42.586834   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:42.597836   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:42.597919   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:42.612480   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:42.612564   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:42.631609   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:42.631698   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:42.642093   20917 logs.go:276] 0 containers: []
	W0923 04:36:42.642106   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:42.642173   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:42.653112   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:42.653128   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:42.653133   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:42.664329   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:42.664342   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:42.699942   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:42.699954   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:42.711985   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:42.711997   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:42.727753   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:42.727763   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:42.739277   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:42.739292   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:42.750869   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:42.750878   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:42.768785   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:42.768796   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:42.780421   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:42.780432   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:42.804855   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:42.804862   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:42.841254   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:42.841262   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:42.845472   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:42.845481   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:42.859543   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:42.859552   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:45.375815   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:50.377997   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:50.378251   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:50.399831   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:50.399962   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:50.414962   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:50.415045   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:50.432342   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:50.432420   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:50.445205   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:50.445290   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:50.458257   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:50.458343   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:50.468364   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:50.468449   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:50.485796   20917 logs.go:276] 0 containers: []
	W0923 04:36:50.485809   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:50.485884   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:50.504807   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:50.504824   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:50.504830   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:50.523663   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:50.523674   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:50.538226   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:50.538237   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:50.552067   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:50.552077   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:50.568953   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:50.568965   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:36:50.580955   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:50.580965   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:50.604430   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:50.604440   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:50.608904   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:50.608912   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:50.643191   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:50.643204   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:50.657867   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:50.657877   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:50.669862   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:50.669873   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:50.684831   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:50.684842   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:50.696655   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:36:50.696666   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:36:50.708168   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:50.708179   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:50.744012   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:36:50.744022   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:36:53.256966   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:58.259234   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:58.259410   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:58.270389   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:36:58.270480   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:58.281453   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:36:58.281543   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:58.291979   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:36:58.292056   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:58.302291   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:36:58.302362   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:58.312479   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:36:58.312569   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:58.323327   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:36:58.323404   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:58.333984   20917 logs.go:276] 0 containers: []
	W0923 04:36:58.333997   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:58.334073   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:58.344719   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:36:58.344739   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:36:58.344745   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:36:58.358876   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:36:58.358886   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:36:58.374106   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:58.374118   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:58.408135   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:36:58.408143   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:36:58.423941   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:58.423952   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:58.449659   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:36:58.449671   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:58.461540   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:36:58.461555   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:36:58.475525   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:36:58.475542   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:36:58.487182   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:36:58.487196   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:36:58.504551   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:58.504560   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:58.511684   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:58.511700   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:58.549594   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:36:58.549610   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:36:58.560929   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:36:58.560939   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:36:58.572594   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:36:58.572606   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:36:58.588018   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:36:58.588034   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:01.101545   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:06.103849   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:06.104048   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:06.118501   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:06.118590   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:06.129386   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:06.129468   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:06.139851   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:06.139934   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:06.150591   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:06.150673   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:06.161424   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:06.161505   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:06.171548   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:06.171620   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:06.181837   20917 logs.go:276] 0 containers: []
	W0923 04:37:06.181848   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:06.181919   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:06.194137   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:06.194156   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:06.194162   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:06.208590   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:06.208603   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:06.224305   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:06.224318   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:06.239766   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:06.239780   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:06.275650   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:06.275666   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:06.287330   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:06.287339   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:06.299563   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:06.299576   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:06.313722   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:06.313736   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:06.325009   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:06.325022   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:06.337092   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:06.337104   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:06.348810   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:06.348824   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:37:06.370896   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:06.370910   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:06.382990   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:06.382999   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:06.418891   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:06.418899   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:06.424005   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:06.424011   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
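At the top of each cycle the runner enumerates control-plane containers with one docker ps per component, filtered on the k8s_ name prefix and formatted down to bare IDs. A self-contained sketch of that lookup, assuming a local docker daemon rather than the SSH-tunneled runner used in the actual test:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name carries
    // the k8s_<component> prefix, mirroring the filter used in the log above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // cf. logs.go:276 above
    	}
    }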
	I0923 04:37:08.948323   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:13.950523   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:13.950698   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:13.962973   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:13.963065   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:13.973769   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:13.973844   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:13.985143   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:13.985235   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:13.996023   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:13.996113   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:14.007274   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:14.007355   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:14.018121   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:14.018202   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:14.028187   20917 logs.go:276] 0 containers: []
	W0923 04:37:14.028199   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:14.028272   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:14.044346   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:14.044369   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:14.044375   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:14.056701   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:14.056714   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:14.069005   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:14.069021   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:14.103530   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:14.103539   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:14.138311   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:14.138321   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:14.152451   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:14.152462   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:14.169107   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:14.169119   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:14.181116   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:14.181134   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:14.192715   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:14.192726   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:14.197446   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:14.197453   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:14.212664   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:14.212676   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:37:14.230350   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:14.230363   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:14.241896   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:14.241906   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:14.253336   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:14.253350   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:14.265079   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:14.265093   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
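The "container status" step runs a shell fallback: use crictl when which finds it on PATH, otherwise fall back to docker ps -a. A small sketch running the same one-liner, assuming a local bash and sufficient sudo rights:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same fallback one-liner as the log: prefer crictl if `which` finds
    	// it, otherwise fall back to docker. Assumes local bash and sudo.
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    	}
    	fmt.Print(string(out))
    }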
	I0923 04:37:16.793057   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:21.795371   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:21.795579   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:21.811252   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:21.811346   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:21.823654   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:21.823733   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:21.835146   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:21.835230   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:21.845416   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:21.845498   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:21.856287   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:21.856366   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:21.867176   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:21.867269   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:21.877954   20917 logs.go:276] 0 containers: []
	W0923 04:37:21.877965   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:21.878036   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:21.888545   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:21.888561   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:21.888566   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:21.924526   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:21.924536   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:21.929064   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:21.929071   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:21.964545   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:21.964558   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:21.978056   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:21.978072   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:21.991921   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:21.991932   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:22.006119   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:22.006132   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:22.021303   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:22.021318   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:22.033906   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:22.033917   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:22.058743   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:22.058756   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:22.073835   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:22.073848   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:22.087737   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:22.087747   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:22.099364   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:22.099373   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:22.110759   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:22.110771   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:22.122233   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:22.122243   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
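Each container ID found by the enumeration is then dumped with a bounded docker logs --tail 400, which keeps any one wedged component from flooding the report. A hypothetical helper mirroring that step, with the container ID taken from the enumeration above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs dumps at most the last 400 lines of one container's output,
    // the same bound used on every "docker logs" call in the log above.
    func gatherLogs(name, id string) {
    	fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	if err != nil {
    		fmt.Printf("gathering %s failed: %v\n", name, err)
    		return
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	// container ID taken from the log above (kube-apiserver)
    	gatherLogs("kube-apiserver", "47f6323d8821")
    }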
	I0923 04:37:24.642363   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:29.644529   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:29.644734   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:29.660461   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:29.660562   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:29.673083   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:29.673161   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:29.689506   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:29.689599   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:29.700883   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:29.700971   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:29.712204   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:29.712290   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:29.724593   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:29.724676   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:29.734956   20917 logs.go:276] 0 containers: []
	W0923 04:37:29.734968   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:29.735039   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:29.745396   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:29.745414   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:29.745420   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:29.756582   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:29.756591   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:29.767989   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:29.768000   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:37:29.787136   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:29.787147   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:29.802809   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:29.802819   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:29.814072   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:29.814088   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:29.839864   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:29.839875   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:29.844465   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:29.844474   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:29.884472   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:29.884483   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:29.896283   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:29.896292   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:29.910465   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:29.910474   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:29.922474   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:29.922484   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:29.956720   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:29.956728   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:29.975266   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:29.975277   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:29.991142   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:29.991152   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:32.527417   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:37.529656   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:37.530029   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:37.556713   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:37.556862   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:37.574816   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:37.574912   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:37.588215   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:37.588304   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:37.599536   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:37.599621   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:37.610263   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:37.610341   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:37.620755   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:37.620838   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:37.630948   20917 logs.go:276] 0 containers: []
	W0923 04:37:37.630959   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:37.631034   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:37.641222   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:37.641240   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:37.641245   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:37.675673   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:37.675684   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:37.680874   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:37.680882   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:37.694731   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:37.694744   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:37.706464   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:37.706475   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:37.723055   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:37.723066   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:37.736856   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:37.736868   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:37.752529   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:37.752544   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:37:37.770239   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:37.770254   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:37.788908   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:37.788918   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:37.825410   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:37.825418   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:37.839717   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:37.839731   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:37.852231   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:37.852243   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:37.864194   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:37.864206   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:37.876404   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:37.876422   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:40.403181   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:45.405430   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:45.405600   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:45.416638   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:45.416731   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:45.427328   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:45.427416   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:45.437836   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:45.437919   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:45.450869   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:45.450949   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:45.460874   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:45.460958   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:45.471687   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:45.471770   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:45.482580   20917 logs.go:276] 0 containers: []
	W0923 04:37:45.482592   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:45.482661   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:45.492988   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:45.493009   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:45.493014   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:45.509774   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:45.509785   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:45.521718   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:45.521728   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:45.533496   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:45.533506   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:45.547935   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:45.547946   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:45.559413   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:45.559425   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:45.571775   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:45.571787   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:45.606537   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:45.606545   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:45.621471   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:45.621482   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:45.633158   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:45.633169   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:45.650990   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:45.651004   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:45.686652   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:45.686662   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:45.700442   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:45.700454   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:37:45.727330   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:45.727344   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:45.751384   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:45.751393   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:48.258417   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:53.260672   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:53.260892   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:53.281242   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:37:53.281369   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:53.296356   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:37:53.296438   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:53.308504   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:37:53.308595   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:53.319330   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:37:53.319410   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:53.330426   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:37:53.330504   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:53.341403   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:37:53.341485   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:53.351881   20917 logs.go:276] 0 containers: []
	W0923 04:37:53.351894   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:53.351969   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:53.362578   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:37:53.362597   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:37:53.362603   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:37:53.374582   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:37:53.374593   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:37:53.396640   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:37:53.396651   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:37:53.408216   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:37:53.408227   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:53.420035   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:53.420047   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:53.454223   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:37:53.454233   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:37:53.469781   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:37:53.469794   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:37:53.481319   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:37:53.481331   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:37:53.495730   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:37:53.495740   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:37:53.507546   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:53.507557   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:53.512553   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:37:53.512560   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:37:53.528933   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:37:53.528947   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:37:53.541513   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:53.541524   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:53.564527   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:53.564534   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:53.599123   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:37:53.599137   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:37:56.115542   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:01.117786   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:01.117928   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:38:01.135487   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:38:01.135584   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:38:01.147373   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:38:01.147464   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:38:01.157977   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:38:01.158064   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:38:01.168037   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:38:01.168112   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:38:01.179161   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:38:01.179246   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:38:01.190190   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:38:01.190274   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:38:01.200261   20917 logs.go:276] 0 containers: []
	W0923 04:38:01.200273   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:38:01.200345   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:38:01.210474   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:38:01.210495   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:38:01.210502   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:38:01.222375   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:38:01.222385   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:38:01.233974   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:38:01.233986   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:38:01.257846   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:38:01.257855   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:38:01.269126   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:38:01.269141   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:38:01.280859   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:38:01.280873   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:38:01.292519   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:38:01.292530   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:38:01.306902   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:38:01.306913   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:38:01.311866   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:38:01.311874   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:38:01.346735   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:38:01.346745   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:38:01.360586   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:38:01.360600   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:38:01.372198   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:38:01.372209   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:38:01.387177   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:38:01.387188   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:38:01.423509   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:38:01.423518   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:38:01.435087   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:38:01.435098   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:38:03.954272   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:08.956511   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:08.956626   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:38:08.972365   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:38:08.972463   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:38:08.985328   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:38:08.985415   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:38:08.995834   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:38:08.995923   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:38:09.011071   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:38:09.011152   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:38:09.022304   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:38:09.022388   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:38:09.032790   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:38:09.032869   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:38:09.043291   20917 logs.go:276] 0 containers: []
	W0923 04:38:09.043325   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:38:09.043397   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:38:09.055136   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:38:09.055152   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:38:09.055157   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:38:09.071154   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:38:09.071163   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:38:09.089838   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:38:09.089854   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:38:09.114154   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:38:09.114170   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:38:09.149492   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:38:09.149501   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:38:09.185169   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:38:09.185179   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:38:09.196949   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:38:09.196960   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:38:09.208837   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:38:09.208851   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:38:09.220617   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:38:09.220630   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:38:09.232696   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:38:09.232732   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:38:09.244935   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:38:09.244947   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:38:09.257250   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:38:09.257263   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:38:09.268712   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:38:09.268726   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:38:09.273177   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:38:09.273183   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:38:09.288108   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:38:09.288122   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:38:11.804454   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:16.806637   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:16.806896   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:38:16.826643   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:38:16.826761   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:38:16.841170   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:38:16.841260   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:38:16.853180   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:38:16.853262   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:38:16.863957   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:38:16.864036   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:38:16.875215   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:38:16.875296   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:38:16.889475   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:38:16.889562   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:38:16.900423   20917 logs.go:276] 0 containers: []
	W0923 04:38:16.900441   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:38:16.900514   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:38:16.911051   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:38:16.911069   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:38:16.911075   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:38:16.926034   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:38:16.926049   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:38:16.937814   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:38:16.937832   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:38:16.954039   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:38:16.954050   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:38:16.965807   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:38:16.965817   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:38:16.970568   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:38:16.970574   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:38:16.982251   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:38:16.982267   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:38:16.994089   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:38:16.994103   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:38:17.009154   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:38:17.009168   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:38:17.027839   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:38:17.027849   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:38:17.063405   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:38:17.063417   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:38:17.075224   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:38:17.075239   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:38:17.109362   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:38:17.109369   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:38:17.123796   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:38:17.123807   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:38:17.135392   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:38:17.135402   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:38:19.660308   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:24.662694   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:24.663012   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:38:24.686956   20917 logs.go:276] 1 containers: [47f6323d8821]
	I0923 04:38:24.687088   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:38:24.703410   20917 logs.go:276] 1 containers: [ea5ce8e910e0]
	I0923 04:38:24.703509   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:38:24.720278   20917 logs.go:276] 4 containers: [1724f5ef67dd c5235c1fc6df 93e3a4effe7c 2461d3ccacc3]
	I0923 04:38:24.720365   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:38:24.731615   20917 logs.go:276] 1 containers: [e23bcd933edc]
	I0923 04:38:24.731702   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:38:24.742352   20917 logs.go:276] 1 containers: [9f715c76cce0]
	I0923 04:38:24.742438   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:38:24.753143   20917 logs.go:276] 1 containers: [775ac458282b]
	I0923 04:38:24.753229   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:38:24.763912   20917 logs.go:276] 0 containers: []
	W0923 04:38:24.763937   20917 logs.go:278] No container was found matching "kindnet"
	I0923 04:38:24.764017   20917 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:38:24.775089   20917 logs.go:276] 1 containers: [261771dabdad]
	I0923 04:38:24.775109   20917 logs.go:123] Gathering logs for kubelet ...
	I0923 04:38:24.775115   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:38:24.810469   20917 logs.go:123] Gathering logs for dmesg ...
	I0923 04:38:24.810477   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:38:24.815028   20917 logs.go:123] Gathering logs for kube-apiserver [47f6323d8821] ...
	I0923 04:38:24.815034   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47f6323d8821"
	I0923 04:38:24.829737   20917 logs.go:123] Gathering logs for kube-controller-manager [775ac458282b] ...
	I0923 04:38:24.829747   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 775ac458282b"
	I0923 04:38:24.847311   20917 logs.go:123] Gathering logs for Docker ...
	I0923 04:38:24.847320   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:38:24.871817   20917 logs.go:123] Gathering logs for container status ...
	I0923 04:38:24.871825   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:38:24.884519   20917 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:38:24.884529   20917 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:38:24.923081   20917 logs.go:123] Gathering logs for kube-scheduler [e23bcd933edc] ...
	I0923 04:38:24.923095   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e23bcd933edc"
	I0923 04:38:24.938329   20917 logs.go:123] Gathering logs for kube-proxy [9f715c76cce0] ...
	I0923 04:38:24.938345   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f715c76cce0"
	I0923 04:38:24.951030   20917 logs.go:123] Gathering logs for coredns [1724f5ef67dd] ...
	I0923 04:38:24.951040   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1724f5ef67dd"
	I0923 04:38:24.966952   20917 logs.go:123] Gathering logs for storage-provisioner [261771dabdad] ...
	I0923 04:38:24.966964   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 261771dabdad"
	I0923 04:38:24.979213   20917 logs.go:123] Gathering logs for coredns [2461d3ccacc3] ...
	I0923 04:38:24.979229   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2461d3ccacc3"
	I0923 04:38:24.991186   20917 logs.go:123] Gathering logs for etcd [ea5ce8e910e0] ...
	I0923 04:38:24.991197   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5ce8e910e0"
	I0923 04:38:25.005242   20917 logs.go:123] Gathering logs for coredns [c5235c1fc6df] ...
	I0923 04:38:25.005254   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5235c1fc6df"
	I0923 04:38:25.017526   20917 logs.go:123] Gathering logs for coredns [93e3a4effe7c] ...
	I0923 04:38:25.017538   20917 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93e3a4effe7c"
	I0923 04:38:27.532427   20917 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:32.534206   20917 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:32.538687   20917 out.go:201] 
	W0923 04:38:32.542695   20917 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 04:38:32.542703   20917 out.go:270] * 
	W0923 04:38:32.543368   20917 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:38:32.555718   20917 out.go:201] 

** /stderr **
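
Each healthz probe above gives up after roughly five seconds with "Client.Timeout exceeded while awaiting headers", and the wait for a healthy node is capped at 6m0s overall, at which point minikube exits with status 80 (its guest-error class of exit codes, here GUEST_START). The minimal Go sketch below illustrates that kind of bounded healthz poll; it is not minikube's actual implementation: the endpoint and the 5s/6m timeouts are taken from the log, while the 3s retry interval and the InsecureSkipVerify transport are assumptions.

	package main

	import (
		"crypto/tls"
		"errors"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or the overall
	// deadline passes; each individual probe is bounded by perProbe.
	func waitForHealthz(url string, overall, perProbe time.Duration) error {
		client := &http.Client{
			Timeout: perProbe, // produces "Client.Timeout exceeded while awaiting headers"
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported healthy
				}
			}
			time.Sleep(3 * time.Second) // assumed retry interval
		}
		return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}
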
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-903000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-23 04:38:32.656212 -0700 PDT m=+1340.660159210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-903000 -n running-upgrade-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-903000 -n running-upgrade-903000: exit status 2 (15.617281s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-903000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-897000 sudo cat                            | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo cat                            | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo                                | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo                                | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo                                | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo cat                            | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo cat                            | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo                                | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo                                | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo                                | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo find                           | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-897000 sudo crio                           | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-897000                                     | cilium-897000             | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT | 23 Sep 24 04:28 PDT |
	| start   | -p kubernetes-upgrade-842000                         | kubernetes-upgrade-842000 | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-819000                             | offline-docker-819000     | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT | 23 Sep 24 04:28 PDT |
	| start   | -p stopped-upgrade-231000                            | minikube                  | jenkins | v1.26.0 | 23 Sep 24 04:28 PDT | 23 Sep 24 04:29 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-842000                         | kubernetes-upgrade-842000 | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT | 23 Sep 24 04:28 PDT |
	| start   | -p kubernetes-upgrade-842000                         | kubernetes-upgrade-842000 | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-842000                         | kubernetes-upgrade-842000 | jenkins | v1.34.0 | 23 Sep 24 04:28 PDT | 23 Sep 24 04:28 PDT |
	| start   | -p running-upgrade-903000                            | minikube                  | jenkins | v1.26.0 | 23 Sep 24 04:28 PDT | 23 Sep 24 04:29 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-231000 stop                          | minikube                  | jenkins | v1.26.0 | 23 Sep 24 04:29 PDT | 23 Sep 24 04:29 PDT |
	| start   | -p stopped-upgrade-231000                            | stopped-upgrade-231000    | jenkins | v1.34.0 | 23 Sep 24 04:29 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-903000                            | running-upgrade-903000    | jenkins | v1.34.0 | 23 Sep 24 04:29 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-231000                            | stopped-upgrade-231000    | jenkins | v1.34.0 | 23 Sep 24 04:38 PDT | 23 Sep 24 04:38 PDT |
	| start   | -p pause-309000 --memory=2048                        | pause-309000              | jenkins | v1.34.0 | 23 Sep 24 04:38 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 04:38:44
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 04:38:44.836836   21406 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:38:44.836970   21406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:38:44.836972   21406 out.go:358] Setting ErrFile to fd 2...
	I0923 04:38:44.836974   21406 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:38:44.837111   21406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:38:44.838314   21406 out.go:352] Setting JSON to false
	I0923 04:38:44.856132   21406 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9495,"bootTime":1727082029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:38:44.856204   21406 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:38:44.861126   21406 out.go:177] * [pause-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:38:44.870040   21406 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:38:44.870078   21406 notify.go:220] Checking for updates...
	I0923 04:38:44.878019   21406 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:38:44.881058   21406 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:38:44.885050   21406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:38:44.888052   21406 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:38:44.891039   21406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:38:44.894375   21406 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:38:44.894453   21406 config.go:182] Loaded profile config "running-upgrade-903000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:38:44.894520   21406 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:38:44.897995   21406 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:38:44.905028   21406 start.go:297] selected driver: qemu2
	I0923 04:38:44.905030   21406 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:38:44.905035   21406 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:38:44.907591   21406 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:38:44.909958   21406 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:38:44.913070   21406 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:38:44.913086   21406 cni.go:84] Creating CNI manager for ""
	I0923 04:38:44.913105   21406 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:38:44.913110   21406 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:38:44.913134   21406 start.go:340] cluster config:
	{Name:pause-309000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:38:44.916831   21406 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:38:44.924023   21406 out.go:177] * Starting "pause-309000" primary control-plane node in "pause-309000" cluster
	I0923 04:38:44.928022   21406 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:38:44.928036   21406 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:38:44.928045   21406 cache.go:56] Caching tarball of preloaded images
	I0923 04:38:44.928115   21406 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:38:44.928122   21406 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:38:44.928184   21406 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/pause-309000/config.json ...
	I0923 04:38:44.928194   21406 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/pause-309000/config.json: {Name:mk152acc9cf7882dbc8f295539031fd33ff17113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:38:44.928413   21406 start.go:360] acquireMachinesLock for pause-309000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:38:44.928445   21406 start.go:364] duration metric: took 28.667µs to acquireMachinesLock for "pause-309000"
	I0923 04:38:44.928456   21406 start.go:93] Provisioning new machine with config: &{Name:pause-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:38:44.928482   21406 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:38:44.937011   21406 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0923 04:38:44.961641   21406 start.go:159] libmachine.API.Create for "pause-309000" (driver="qemu2")
	I0923 04:38:44.961670   21406 client.go:168] LocalClient.Create starting
	I0923 04:38:44.961738   21406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:38:44.961767   21406 main.go:141] libmachine: Decoding PEM data...
	I0923 04:38:44.961775   21406 main.go:141] libmachine: Parsing certificate...
	I0923 04:38:44.961815   21406 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:38:44.961836   21406 main.go:141] libmachine: Decoding PEM data...
	I0923 04:38:44.961844   21406 main.go:141] libmachine: Parsing certificate...
	I0923 04:38:44.962200   21406 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:38:45.346165   21406 main.go:141] libmachine: Creating SSH key...
	I0923 04:38:45.589192   21406 main.go:141] libmachine: Creating Disk image...
	I0923 04:38:45.589199   21406 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:38:45.589467   21406 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/disk.qcow2
	I0923 04:38:45.599985   21406 main.go:141] libmachine: STDOUT: 
	I0923 04:38:45.599997   21406 main.go:141] libmachine: STDERR: 
	I0923 04:38:45.600066   21406 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/disk.qcow2 +20000M
	I0923 04:38:45.608203   21406 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:38:45.608213   21406 main.go:141] libmachine: STDERR: 
	I0923 04:38:45.608230   21406 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/disk.qcow2
	I0923 04:38:45.608233   21406 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:38:45.608244   21406 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:38:45.608271   21406 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:0d:3c:e4:57:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/pause-309000/disk.qcow2
	I0923 04:38:45.610114   21406 main.go:141] libmachine: STDOUT: 
	I0923 04:38:45.610122   21406 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:38:45.610143   21406 client.go:171] duration metric: took 648.47175ms to LocalClient.Create
	I0923 04:38:47.612344   21406 start.go:128] duration metric: took 2.683849625s to createHost
	I0923 04:38:47.612388   21406 start.go:83] releasing machines lock for "pause-309000", held for 2.683949875s
	W0923 04:38:47.612510   21406 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:38:47.626202   21406 out.go:177] * Deleting "pause-309000" in qemu2 ...
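
The `Failed to connect to "/var/run/socket_vmnet": Connection refused` line above is the immediate cause of this create failure: the host-side socket_vmnet daemon was not accepting connections, so the qemu2 driver could not hand QEMU a network file descriptor and the machine was torn down. A tiny Go probe of that unix socket is sketched below, for illustration only; the socket path comes from the log, and the 2s timeout is an assumption.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the result matches the log: connection refused.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
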
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-09-23 11:29:10 UTC, ends at Mon 2024-09-23 11:38:48 UTC. --
	Sep 23 11:38:33 running-upgrade-903000 dockerd[4555]: time="2024-09-23T11:38:33.427253810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:38:33 running-upgrade-903000 dockerd[4555]: time="2024-09-23T11:38:33.427307518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:38:33 running-upgrade-903000 dockerd[4555]: time="2024-09-23T11:38:33.427331101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:38:33 running-upgrade-903000 dockerd[4555]: time="2024-09-23T11:38:33.427397017Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c32fb86c35c5bf718e37d9c0e12bddf539c452a2820e226a27e7d5b9ea92de73 pid=20732 runtime=io.containerd.runc.v2
	Sep 23 11:38:34 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:34Z" level=error msg="ContainerStats resp: {0x4000777980 linux}"
	Sep 23 11:38:34 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:34Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x40004c8340 linux}"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x40004c8cc0 linux}"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x40004c8e00 linux}"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x40004c92c0 linux}"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x4000912140 linux}"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x4000872a00 linux}"
	Sep 23 11:38:35 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:35Z" level=error msg="ContainerStats resp: {0x4000872e40 linux}"
	Sep 23 11:38:39 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:39Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 11:38:44 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:44Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 23 11:38:45 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:45Z" level=error msg="ContainerStats resp: {0x4000880200 linux}"
	Sep 23 11:38:45 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:45Z" level=error msg="ContainerStats resp: {0x40007fe840 linux}"
	Sep 23 11:38:46 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:46Z" level=error msg="ContainerStats resp: {0x40008806c0 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x400094b800 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x4000881580 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x4000881e00 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x400041ec80 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x40004c8a00 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x40004c8f80 linux}"
	Sep 23 11:38:47 running-upgrade-903000 cri-dockerd[4275]: time="2024-09-23T11:38:47Z" level=error msg="ContainerStats resp: {0x4000992480 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fb41457888c45       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   d22778a78a3c0
	c32fb86c35c5b       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   33f5df6a25dee
	1724f5ef67dd7       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d22778a78a3c0
	c5235c1fc6dfa       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   33f5df6a25dee
	9f715c76cce0f       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   bcec430cf5725
	261771dabdad9       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   c0898e5c53116
	ea5ce8e910e00       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   a9ffb605c91d4
	e23bcd933edc6       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e408f01307a00
	47f6323d8821b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   cf2c9b4b31e86
	775ac458282bf       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   e5244694cdcb4
	
	
	==> coredns [1724f5ef67dd] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:59583->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:57811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:40925->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:49962->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:58021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:48320->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:37369->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:53493->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:37456->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1917227937327077612.7202797282560796122. HINFO: read udp 10.244.0.2:41661->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c32fb86c35c5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7861242746019909375.8986269642685420212. HINFO: read udp 10.244.0.3:40101->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7861242746019909375.8986269642685420212. HINFO: read udp 10.244.0.3:50073->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7861242746019909375.8986269642685420212. HINFO: read udp 10.244.0.3:46116->10.0.2.3:53: i/o timeout
	
	
	==> coredns [c5235c1fc6df] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:33635->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:57230->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:43061->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:35443->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:56907->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:50015->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:59624->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:57791->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:52850->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3820126376398250149.4248488088500515840. HINFO: read udp 10.244.0.3:32833->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fb41457888c4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5302188393583319690.3992580982438899729. HINFO: read udp 10.244.0.2:46814->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5302188393583319690.3992580982438899729. HINFO: read udp 10.244.0.2:52997->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5302188393583319690.3992580982438899729. HINFO: read udp 10.244.0.2:35700->10.0.2.3:53: i/o timeout
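
All four coredns instances above fail the same way: their HINFO self-test queries to 10.0.2.3:53, the resolver built into QEMU's user-mode networking, time out, so the pods never reach an upstream DNS server. The short Go sketch below reproduces such an upstream lookup, for illustration only; the resolver address is from the log, while the probe name kubernetes.io and the 2s timeouts are assumptions.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Route lookups through 10.0.2.3:53, the upstream coredns keeps timing out on.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		if _, err := r.LookupHost(ctx, "kubernetes.io"); err != nil {
			// In this environment the expected outcome is an i/o timeout.
			fmt.Println("upstream DNS probe failed:", err)
		}
	}
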
	
	
	==> describe nodes <==
	Name:               running-upgrade-903000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-903000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=running-upgrade-903000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T04_34_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:34:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-903000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:38:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:34:31 +0000   Mon, 23 Sep 2024 11:34:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:34:31 +0000   Mon, 23 Sep 2024 11:34:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:34:31 +0000   Mon, 23 Sep 2024 11:34:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:34:31 +0000   Mon, 23 Sep 2024 11:34:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-903000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 5602ab4950784f40b5100af320f5fd1d
	  System UUID:                5602ab4950784f40b5100af320f5fd1d
	  Boot ID:                    4ba2f087-5763-4ed7-bf06-b8c9660d3c71
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5gsfs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-sb6mb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-903000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-903000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-903000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-68s5b                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-903000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m22s (x3 over 4m23s)  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x3 over 4m23s)  kubelet          Node running-upgrade-903000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m23s)  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-903000 status is now: NodeReady
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-903000 event: Registered Node running-upgrade-903000 in Controller
	
	
	==> dmesg <==
	[  +0.136480] systemd-fstab-generator[867]: Ignoring "noauto" for root device
	[  +0.059613] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.059869] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +1.208862] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
	[  +0.060000] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +2.299622] systemd-fstab-generator[1279]: Ignoring "noauto" for root device
	[  +9.623355] systemd-fstab-generator[1912]: Ignoring "noauto" for root device
	[ +14.356650] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.871055] systemd-fstab-generator[2572]: Ignoring "noauto" for root device
	[  +0.200070] systemd-fstab-generator[2657]: Ignoring "noauto" for root device
	[  +0.099907] systemd-fstab-generator[2668]: Ignoring "noauto" for root device
	[  +0.112723] systemd-fstab-generator[2681]: Ignoring "noauto" for root device
	[  +5.153671] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 11:30] systemd-fstab-generator[4232]: Ignoring "noauto" for root device
	[  +0.088164] systemd-fstab-generator[4243]: Ignoring "noauto" for root device
	[  +0.081354] systemd-fstab-generator[4254]: Ignoring "noauto" for root device
	[  +0.100541] systemd-fstab-generator[4268]: Ignoring "noauto" for root device
	[  +2.591830] systemd-fstab-generator[4542]: Ignoring "noauto" for root device
	[  +2.172103] systemd-fstab-generator[4886]: Ignoring "noauto" for root device
	[  +1.063548] systemd-fstab-generator[5031]: Ignoring "noauto" for root device
	[  +5.160646] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.018475] kauditd_printk_skb: 1 callbacks suppressed
	[Sep23 11:34] systemd-fstab-generator[13772]: Ignoring "noauto" for root device
	[  +6.134526] systemd-fstab-generator[14373]: Ignoring "noauto" for root device
	[  +0.462492] systemd-fstab-generator[14507]: Ignoring "noauto" for root device
	
	
	==> etcd [ea5ce8e910e0] <==
	{"level":"info","ts":"2024-09-23T11:34:27.240Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:34:27.240Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T11:34:27.240Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-23T11:34:27.240Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-23T11:34:27.240Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-23T11:34:27.241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-23T11:34:27.241Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:34:27.926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-23T11:34:27.927Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-903000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:34:27.927Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:34:27.929Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:34:27.930Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:34:27.930Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:34:27.931Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-23T11:34:27.931Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:34:27.931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:34:27.931Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:34:27.931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:34:27.931Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:38:48 up 9 min,  0 users,  load average: 0.08, 0.23, 0.15
	Linux running-upgrade-903000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [47f6323d8821] <==
	I0923 11:34:29.331796       1 controller.go:611] quota admission added evaluator for: namespaces
	I0923 11:34:29.373466       1 cache.go:39] Caches are synced for autoregister controller
	I0923 11:34:29.373597       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0923 11:34:29.373637       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0923 11:34:29.374672       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0923 11:34:29.374739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:34:29.377964       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0923 11:34:30.110782       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0923 11:34:30.278878       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0923 11:34:30.282050       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0923 11:34:30.282077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:34:30.419163       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 11:34:30.429153       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 11:34:30.536098       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0923 11:34:30.538003       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0923 11:34:30.538356       1 controller.go:611] quota admission added evaluator for: endpoints
	I0923 11:34:30.539698       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 11:34:31.419598       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0923 11:34:31.805709       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0923 11:34:31.809237       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0923 11:34:31.814205       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0923 11:34:31.863450       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:34:45.024354       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0923 11:34:45.074901       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0923 11:34:45.549007       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [775ac458282b] <==
	I0923 11:34:44.371986       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0923 11:34:44.372009       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-903000. Assuming now as a timestamp.
	I0923 11:34:44.372025       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0923 11:34:44.372119       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0923 11:34:44.372164       1 event.go:294] "Event occurred" object="running-upgrade-903000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-903000 event: Registered Node running-upgrade-903000 in Controller"
	I0923 11:34:44.372784       1 shared_informer.go:262] Caches are synced for stateful set
	I0923 11:34:44.385618       1 shared_informer.go:262] Caches are synced for PVC protection
	I0923 11:34:44.387751       1 shared_informer.go:262] Caches are synced for GC
	I0923 11:34:44.392324       1 shared_informer.go:262] Caches are synced for job
	I0923 11:34:44.395484       1 shared_informer.go:262] Caches are synced for persistent volume
	I0923 11:34:44.397688       1 shared_informer.go:262] Caches are synced for endpoint
	I0923 11:34:44.420883       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0923 11:34:44.422643       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0923 11:34:44.428090       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 11:34:44.434264       1 shared_informer.go:262] Caches are synced for disruption
	I0923 11:34:44.434275       1 disruption.go:371] Sending events to api server.
	I0923 11:34:44.470664       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0923 11:34:44.476176       1 shared_informer.go:262] Caches are synced for resource quota
	I0923 11:34:44.890420       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 11:34:44.923395       1 shared_informer.go:262] Caches are synced for garbage collector
	I0923 11:34:44.923404       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 11:34:45.027382       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-68s5b"
	I0923 11:34:45.076009       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0923 11:34:45.276223       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5gsfs"
	I0923 11:34:45.280520       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-sb6mb"
	
	
	==> kube-proxy [9f715c76cce0] <==
	I0923 11:34:45.535670       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0923 11:34:45.535696       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0923 11:34:45.535709       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0923 11:34:45.547075       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0923 11:34:45.547147       1 server_others.go:206] "Using iptables Proxier"
	I0923 11:34:45.547172       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0923 11:34:45.547324       1 server.go:661] "Version info" version="v1.24.1"
	I0923 11:34:45.547331       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:34:45.547616       1 config.go:317] "Starting service config controller"
	I0923 11:34:45.547628       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0923 11:34:45.547636       1 config.go:226] "Starting endpoint slice config controller"
	I0923 11:34:45.547647       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0923 11:34:45.547913       1 config.go:444] "Starting node config controller"
	I0923 11:34:45.547915       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0923 11:34:45.651206       1 shared_informer.go:262] Caches are synced for node config
	I0923 11:34:45.651222       1 shared_informer.go:262] Caches are synced for service config
	I0923 11:34:45.651228       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [e23bcd933edc] <==
	W0923 11:34:29.339572       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:34:29.339592       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0923 11:34:29.339618       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:34:29.339666       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0923 11:34:29.339791       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:34:29.339799       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:34:29.339878       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:34:29.339906       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0923 11:34:29.339951       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:34:29.339958       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0923 11:34:29.340033       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:34:29.340039       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0923 11:34:29.340066       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:34:29.340072       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0923 11:34:29.340084       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:34:29.340087       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0923 11:34:30.158390       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:34:30.158433       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0923 11:34:30.161007       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:34:30.161021       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0923 11:34:30.280993       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:34:30.281018       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0923 11:34:30.357908       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 11:34:30.358007       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0923 11:34:30.838445       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-09-23 11:29:10 UTC, ends at Mon 2024-09-23 11:38:48 UTC. --
	Sep 23 11:34:33 running-upgrade-903000 kubelet[14379]: I0923 11:34:33.877085   14379 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/4e9861f6-a33f-4b03-b904-c3b69208486a/volumes"
	Sep 23 11:34:34 running-upgrade-903000 kubelet[14379]: I0923 11:34:34.033473   14379 request.go:601] Waited for 1.136125362s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 23 11:34:34 running-upgrade-903000 kubelet[14379]: E0923 11:34:34.036664   14379 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-903000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-903000"
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: I0923 11:34:44.355276   14379 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: I0923 11:34:44.355604   14379 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: I0923 11:34:44.377777   14379 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: I0923 11:34:44.557440   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75a577b3-99c7-470a-a18b-ab167796ba6a-tmp\") pod \"storage-provisioner\" (UID: \"75a577b3-99c7-470a-a18b-ab167796ba6a\") " pod="kube-system/storage-provisioner"
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: I0923 11:34:44.557471   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfr2z\" (UniqueName: \"kubernetes.io/projected/75a577b3-99c7-470a-a18b-ab167796ba6a-kube-api-access-mfr2z\") pod \"storage-provisioner\" (UID: \"75a577b3-99c7-470a-a18b-ab167796ba6a\") " pod="kube-system/storage-provisioner"
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: E0923 11:34:44.663663   14379 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: E0923 11:34:44.663689   14379 projected.go:192] Error preparing data for projected volume kube-api-access-mfr2z for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 23 11:34:44 running-upgrade-903000 kubelet[14379]: E0923 11:34:44.663753   14379 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/75a577b3-99c7-470a-a18b-ab167796ba6a-kube-api-access-mfr2z podName:75a577b3-99c7-470a-a18b-ab167796ba6a nodeName:}" failed. No retries permitted until 2024-09-23 11:34:45.163736917 +0000 UTC m=+13.370522516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mfr2z" (UniqueName: "kubernetes.io/projected/75a577b3-99c7-470a-a18b-ab167796ba6a-kube-api-access-mfr2z") pod "storage-provisioner" (UID: "75a577b3-99c7-470a-a18b-ab167796ba6a") : configmap "kube-root-ca.crt" not found
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.030727   14379 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.164699   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3eab187e-5eb7-4dbd-9fb3-a903e6502c4f-lib-modules\") pod \"kube-proxy-68s5b\" (UID: \"3eab187e-5eb7-4dbd-9fb3-a903e6502c4f\") " pod="kube-system/kube-proxy-68s5b"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.164731   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3eab187e-5eb7-4dbd-9fb3-a903e6502c4f-kube-proxy\") pod \"kube-proxy-68s5b\" (UID: \"3eab187e-5eb7-4dbd-9fb3-a903e6502c4f\") " pod="kube-system/kube-proxy-68s5b"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.164742   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3eab187e-5eb7-4dbd-9fb3-a903e6502c4f-xtables-lock\") pod \"kube-proxy-68s5b\" (UID: \"3eab187e-5eb7-4dbd-9fb3-a903e6502c4f\") " pod="kube-system/kube-proxy-68s5b"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.164767   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x48jd\" (UniqueName: \"kubernetes.io/projected/3eab187e-5eb7-4dbd-9fb3-a903e6502c4f-kube-api-access-x48jd\") pod \"kube-proxy-68s5b\" (UID: \"3eab187e-5eb7-4dbd-9fb3-a903e6502c4f\") " pod="kube-system/kube-proxy-68s5b"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.277709   14379 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.284654   14379 topology_manager.go:200] "Topology Admit Handler"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.366726   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b71cb9ca-1d7f-4997-ad41-cf993c099a3d-config-volume\") pod \"coredns-6d4b75cb6d-5gsfs\" (UID: \"b71cb9ca-1d7f-4997-ad41-cf993c099a3d\") " pod="kube-system/coredns-6d4b75cb6d-5gsfs"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.366747   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z788z\" (UniqueName: \"kubernetes.io/projected/b13df656-a074-4539-8c80-8239facc8a42-kube-api-access-z788z\") pod \"coredns-6d4b75cb6d-sb6mb\" (UID: \"b13df656-a074-4539-8c80-8239facc8a42\") " pod="kube-system/coredns-6d4b75cb6d-sb6mb"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.366800   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5s4p\" (UniqueName: \"kubernetes.io/projected/b71cb9ca-1d7f-4997-ad41-cf993c099a3d-kube-api-access-d5s4p\") pod \"coredns-6d4b75cb6d-5gsfs\" (UID: \"b71cb9ca-1d7f-4997-ad41-cf993c099a3d\") " pod="kube-system/coredns-6d4b75cb6d-5gsfs"
	Sep 23 11:34:45 running-upgrade-903000 kubelet[14379]: I0923 11:34:45.366812   14379 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b13df656-a074-4539-8c80-8239facc8a42-config-volume\") pod \"coredns-6d4b75cb6d-sb6mb\" (UID: \"b13df656-a074-4539-8c80-8239facc8a42\") " pod="kube-system/coredns-6d4b75cb6d-sb6mb"
	Sep 23 11:34:46 running-upgrade-903000 kubelet[14379]: I0923 11:34:46.046678   14379 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d22778a78a3c045f3503bc0d66ffa9a641ac33f1c7c497e27a0a7b2465853fd9"
	Sep 23 11:38:34 running-upgrade-903000 kubelet[14379]: I0923 11:38:34.114387   14379 scope.go:110] "RemoveContainer" containerID="2461d3ccacc3f458534f053aa7e696be95f6ac5e73d0376a545444db28ffe9b4"
	Sep 23 11:38:34 running-upgrade-903000 kubelet[14379]: I0923 11:38:34.135575   14379 scope.go:110] "RemoveContainer" containerID="93e3a4effe7cad9eeb1cae1843e7b30ae65fd88ae0c17603ef8fb7cba7eb8219"
	
	
	==> storage-provisioner [261771dabdad] <==
	I0923 11:34:45.506626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:34:45.513122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:34:45.513141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:34:45.516620       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:34:45.516703       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-903000_713e7b1b-ddc7-4bf5-b0bf-350c5804f16d!
	I0923 11:34:45.517048       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"61cdd06d-b998-4399-9939-8b550bc308dd", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-903000_713e7b1b-ddc7-4bf5-b0bf-350c5804f16d became leader
	I0923 11:34:45.617183       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-903000_713e7b1b-ddc7-4bf5-b0bf-350c5804f16d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-903000 -n running-upgrade-903000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-903000 -n running-upgrade-903000: exit status 2 (15.581180541s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-903000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-903000
--- FAIL: TestRunningBinaryUpgrade (629.02s)

TestKubernetesUpgrade (19.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-842000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-842000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.329571833s)

-- stdout --
	* [kubernetes-upgrade-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-842000" primary control-plane node in "kubernetes-upgrade-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:28:16.396760   20790 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:28:16.396897   20790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:28:16.396900   20790 out.go:358] Setting ErrFile to fd 2...
	I0923 04:28:16.396903   20790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:28:16.397031   20790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:28:16.398106   20790 out.go:352] Setting JSON to false
	I0923 04:28:16.414147   20790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8867,"bootTime":1727082029,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:28:16.414221   20790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:28:16.418738   20790 out.go:177] * [kubernetes-upgrade-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:28:16.424789   20790 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:28:16.424839   20790 notify.go:220] Checking for updates...
	I0923 04:28:16.432747   20790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:28:16.434283   20790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:28:16.436756   20790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:28:16.439749   20790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:28:16.442753   20790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:28:16.446011   20790 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:28:16.446075   20790 config.go:182] Loaded profile config "offline-docker-819000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:28:16.446111   20790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:28:16.450740   20790 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:28:16.457695   20790 start.go:297] selected driver: qemu2
	I0923 04:28:16.457701   20790 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:28:16.457708   20790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:28:16.459898   20790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:28:16.463730   20790 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:28:16.466825   20790 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 04:28:16.466840   20790 cni.go:84] Creating CNI manager for ""
	I0923 04:28:16.466862   20790 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 04:28:16.466889   20790 start.go:340] cluster config:
	{Name:kubernetes-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:28:16.470537   20790 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:28:16.477759   20790 out.go:177] * Starting "kubernetes-upgrade-842000" primary control-plane node in "kubernetes-upgrade-842000" cluster
	I0923 04:28:16.481642   20790 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:28:16.481663   20790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:28:16.481668   20790 cache.go:56] Caching tarball of preloaded images
	I0923 04:28:16.481760   20790 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:28:16.481765   20790 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 04:28:16.481832   20790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kubernetes-upgrade-842000/config.json ...
	I0923 04:28:16.481843   20790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kubernetes-upgrade-842000/config.json: {Name:mkdd488591ff01683cba7d453d4081966095312e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:28:16.482210   20790 start.go:360] acquireMachinesLock for kubernetes-upgrade-842000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:28:16.601313   20790 start.go:364] duration metric: took 119.088542ms to acquireMachinesLock for "kubernetes-upgrade-842000"
	I0923 04:28:16.601365   20790 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:28:16.601484   20790 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:28:16.608807   20790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:28:16.646994   20790 start.go:159] libmachine.API.Create for "kubernetes-upgrade-842000" (driver="qemu2")
	I0923 04:28:16.647035   20790 client.go:168] LocalClient.Create starting
	I0923 04:28:16.647129   20790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:28:16.647182   20790 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:16.647196   20790 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:16.647254   20790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:28:16.647293   20790 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:16.647308   20790 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:16.647913   20790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:28:16.995177   20790 main.go:141] libmachine: Creating SSH key...
	I0923 04:28:17.276569   20790 main.go:141] libmachine: Creating Disk image...
	I0923 04:28:17.276577   20790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:28:17.276839   20790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:17.286642   20790 main.go:141] libmachine: STDOUT: 
	I0923 04:28:17.286660   20790 main.go:141] libmachine: STDERR: 
	I0923 04:28:17.286718   20790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2 +20000M
	I0923 04:28:17.294674   20790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:28:17.294694   20790 main.go:141] libmachine: STDERR: 
	I0923 04:28:17.294709   20790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:17.294715   20790 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:28:17.294723   20790 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:28:17.294756   20790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:47:29:1a:ff:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:17.296382   20790 main.go:141] libmachine: STDOUT: 
	I0923 04:28:17.296394   20790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:28:17.296414   20790 client.go:171] duration metric: took 649.375291ms to LocalClient.Create
	I0923 04:28:19.298596   20790 start.go:128] duration metric: took 2.69709675s to createHost
	I0923 04:28:19.298661   20790 start.go:83] releasing machines lock for "kubernetes-upgrade-842000", held for 2.69733475s
	W0923 04:28:19.298775   20790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:19.321276   20790 out.go:177] * Deleting "kubernetes-upgrade-842000" in qemu2 ...
	W0923 04:28:19.356774   20790 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:19.356797   20790 start.go:729] Will try again in 5 seconds ...
	I0923 04:28:24.358945   20790 start.go:360] acquireMachinesLock for kubernetes-upgrade-842000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:28:24.359086   20790 start.go:364] duration metric: took 105.459µs to acquireMachinesLock for "kubernetes-upgrade-842000"
	I0923 04:28:24.359122   20790 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:28:24.359198   20790 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:28:24.367382   20790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:28:24.390570   20790 start.go:159] libmachine.API.Create for "kubernetes-upgrade-842000" (driver="qemu2")
	I0923 04:28:24.390607   20790 client.go:168] LocalClient.Create starting
	I0923 04:28:24.390672   20790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:28:24.390712   20790 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:24.390721   20790 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:24.390756   20790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:28:24.390791   20790 main.go:141] libmachine: Decoding PEM data...
	I0923 04:28:24.390798   20790 main.go:141] libmachine: Parsing certificate...
	I0923 04:28:24.391132   20790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:28:24.554530   20790 main.go:141] libmachine: Creating SSH key...
	I0923 04:28:24.644382   20790 main.go:141] libmachine: Creating Disk image...
	I0923 04:28:24.644392   20790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:28:24.644624   20790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:24.653709   20790 main.go:141] libmachine: STDOUT: 
	I0923 04:28:24.653728   20790 main.go:141] libmachine: STDERR: 
	I0923 04:28:24.653820   20790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2 +20000M
	I0923 04:28:24.661884   20790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:28:24.661905   20790 main.go:141] libmachine: STDERR: 
	I0923 04:28:24.661918   20790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:24.661923   20790 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:28:24.661936   20790 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:28:24.661972   20790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b9:f6:f2:48:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:24.663681   20790 main.go:141] libmachine: STDOUT: 
	I0923 04:28:24.663693   20790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:28:24.663711   20790 client.go:171] duration metric: took 273.10075ms to LocalClient.Create
	I0923 04:28:26.665887   20790 start.go:128] duration metric: took 2.306658375s to createHost
	I0923 04:28:26.665915   20790 start.go:83] releasing machines lock for "kubernetes-upgrade-842000", held for 2.306830792s
	W0923 04:28:26.666000   20790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:26.673964   20790 out.go:201] 
	W0923 04:28:26.677976   20790 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:28:26.677989   20790 out.go:270] * 
	* 
	W0923 04:28:26.678607   20790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:28:26.689938   20790 out.go:201] 

** /stderr **
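Every start attempt in this run fails at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. The sketch below is one way to check the daemon on the CI host; the two paths are taken from the log above, while the pgrep usage and the Homebrew service name are assumptions about this particular agent, not something the log confirms:

	# verify the daemon's unix socket exists and a socket_vmnet process is alive (assumed paths from the log above)
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# for a Homebrew-managed install, the daemon can usually be restarted with (assumption):
	sudo brew services restart socket_vmnet

If the socket file exists but connections are still refused, a stale socket left behind by a crashed daemon is a plausible cause; removing the file before restarting the service is a common remedy.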
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-842000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-842000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-842000: (3.443673834s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-842000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-842000 status --format={{.Host}}: exit status 7 (58.352958ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-842000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-842000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.233433625s)

-- stdout --
	* [kubernetes-upgrade-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-842000" primary control-plane node in "kubernetes-upgrade-842000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:28:30.234529   20844 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:28:30.234653   20844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:28:30.234657   20844 out.go:358] Setting ErrFile to fd 2...
	I0923 04:28:30.234660   20844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:28:30.234782   20844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:28:30.235936   20844 out.go:352] Setting JSON to false
	I0923 04:28:30.252437   20844 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8881,"bootTime":1727082029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:28:30.252508   20844 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:28:30.256769   20844 out.go:177] * [kubernetes-upgrade-842000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:28:30.276701   20844 notify.go:220] Checking for updates...
	I0923 04:28:30.281732   20844 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:28:30.287636   20844 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:28:30.294583   20844 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:28:30.298699   20844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:28:30.305634   20844 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:28:30.317580   20844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:28:30.321974   20844 config.go:182] Loaded profile config "kubernetes-upgrade-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 04:28:30.322245   20844 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:28:30.325642   20844 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:28:30.332678   20844 start.go:297] selected driver: qemu2
	I0923 04:28:30.332683   20844 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:28:30.332731   20844 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:28:30.335232   20844 cni.go:84] Creating CNI manager for ""
	I0923 04:28:30.335266   20844 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:28:30.335299   20844 start.go:340] cluster config:
	{Name:kubernetes-upgrade-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:28:30.339065   20844 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:28:30.347625   20844 out.go:177] * Starting "kubernetes-upgrade-842000" primary control-plane node in "kubernetes-upgrade-842000" cluster
	I0923 04:28:30.350554   20844 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:28:30.350572   20844 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:28:30.350582   20844 cache.go:56] Caching tarball of preloaded images
	I0923 04:28:30.350660   20844 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:28:30.350666   20844 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:28:30.350722   20844 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kubernetes-upgrade-842000/config.json ...
	I0923 04:28:30.351016   20844 start.go:360] acquireMachinesLock for kubernetes-upgrade-842000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:28:30.351046   20844 start.go:364] duration metric: took 23.458µs to acquireMachinesLock for "kubernetes-upgrade-842000"
	I0923 04:28:30.351057   20844 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:28:30.351062   20844 fix.go:54] fixHost starting: 
	I0923 04:28:30.351187   20844 fix.go:112] recreateIfNeeded on kubernetes-upgrade-842000: state=Stopped err=<nil>
	W0923 04:28:30.351195   20844 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:28:30.359471   20844 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-842000" ...
	I0923 04:28:30.363604   20844 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:28:30.363640   20844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b9:f6:f2:48:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:30.365639   20844 main.go:141] libmachine: STDOUT: 
	I0923 04:28:30.365657   20844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:28:30.365686   20844 fix.go:56] duration metric: took 14.623459ms for fixHost
	I0923 04:28:30.365691   20844 start.go:83] releasing machines lock for "kubernetes-upgrade-842000", held for 14.640458ms
	W0923 04:28:30.365698   20844 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:28:30.365736   20844 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:30.365741   20844 start.go:729] Will try again in 5 seconds ...
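"Connection refused" on /var/run/socket_vmnet means the socket_vmnet daemon that should own that unix socket is not accepting connections; socket_vmnet_client only hands a connected descriptor to qemu as fd 3 (hence the "-netdev socket,id=net0,fd=3" in the command line above), so qemu never starts at all. The state can be confirmed from the host before the retry fires (a diagnostic sketch using the paths from the log above; not part of the captured test output):

    # Is the daemon running, and does it accept connections on its socket?
    pgrep -fl socket_vmnet          # no output means the daemon is not running
    ls -l /var/run/socket_vmnet     # the socket file may exist even when stale
    # Probe the socket the same way the driver does; "Connection refused"
    # here reproduces the error above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true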
	I0923 04:28:35.366425   20844 start.go:360] acquireMachinesLock for kubernetes-upgrade-842000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:28:35.366933   20844 start.go:364] duration metric: took 387.666µs to acquireMachinesLock for "kubernetes-upgrade-842000"
	I0923 04:28:35.367093   20844 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:28:35.367112   20844 fix.go:54] fixHost starting: 
	I0923 04:28:35.367885   20844 fix.go:112] recreateIfNeeded on kubernetes-upgrade-842000: state=Stopped err=<nil>
	W0923 04:28:35.367911   20844 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:28:35.377834   20844 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-842000" ...
	I0923 04:28:35.383667   20844 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:28:35.383947   20844 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:b9:f6:f2:48:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubernetes-upgrade-842000/disk.qcow2
	I0923 04:28:35.393833   20844 main.go:141] libmachine: STDOUT: 
	I0923 04:28:35.393930   20844 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:28:35.394029   20844 fix.go:56] duration metric: took 26.916459ms for fixHost
	I0923 04:28:35.394050   20844 start.go:83] releasing machines lock for "kubernetes-upgrade-842000", held for 27.091875ms
	W0923 04:28:35.394291   20844 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:28:35.402600   20844 out.go:201] 
	W0923 04:28:35.414465   20844 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:28:35.414558   20844 out.go:270] * 
	* 
	W0923 04:28:35.417154   20844 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
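The advice box is generic; for this particular failure the missing piece is the socket_vmnet daemon itself rather than the profile. One plausible remediation on a host set up like this one (a sketch assuming socket_vmnet was installed via Homebrew, which matches the /opt/socket_vmnet paths above):

    # Restart the daemon that owns /var/run/socket_vmnet, then verify before re-running:
    sudo brew services restart socket_vmnet
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true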
	I0923 04:28:35.426707   20844 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-842000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-842000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-842000 version --output=json: exit status 1 (61.604ms)

** stderr ** 
	error: context "kubernetes-upgrade-842000" does not exist

** /stderr **
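The kubectl failure is a direct consequence of the failed start: minikube only writes the profile's context into the kubeconfig once the cluster comes up, so here the context is simply absent. This is easy to confirm with standard kubectl subcommands (illustrative, not part of the test run):

    kubectl config get-contexts      # kubernetes-upgrade-842000 will not be listed
    kubectl config current-context   # errors or names an unrelated context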
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-23 04:28:35.50038 -0700 PDT m=+743.501612043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-842000 -n kubernetes-upgrade-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-842000 -n kubernetes-upgrade-842000: exit status 7 (34.601709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-842000
--- FAIL: TestKubernetesUpgrade (19.26s)

TestStoppedBinaryUpgrade/Upgrade (585.78s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2152528182 start -p stopped-upgrade-231000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2152528182 start -p stopped-upgrade-231000 --memory=2200 --vm-driver=qemu2 : (51.805852709s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2152528182 -p stopped-upgrade-231000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2152528182 -p stopped-upgrade-231000 stop: (12.110069917s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.786818708s)

-- stdout --
	* [stopped-upgrade-231000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-231000" primary control-plane node in "stopped-upgrade-231000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-231000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0923 04:29:29.621278   20906 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:29:29.621435   20906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:29:29.621439   20906 out.go:358] Setting ErrFile to fd 2...
	I0923 04:29:29.621442   20906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:29:29.621631   20906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:29:29.622780   20906 out.go:352] Setting JSON to false
	I0923 04:29:29.642452   20906 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8940,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:29:29.642519   20906 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:29:29.647775   20906 out.go:177] * [stopped-upgrade-231000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:29:29.655779   20906 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:29:29.655852   20906 notify.go:220] Checking for updates...
	I0923 04:29:29.663640   20906 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:29:29.667718   20906 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:29:29.671696   20906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:29:29.674667   20906 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:29:29.677732   20906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:29:29.680981   20906 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:29:29.684690   20906 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 04:29:29.687728   20906 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:29:29.690705   20906 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:29:29.697697   20906 start.go:297] selected driver: qemu2
	I0923 04:29:29.697704   20906 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 04:29:29.697775   20906 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:29:29.700554   20906 cni.go:84] Creating CNI manager for ""
	I0923 04:29:29.700585   20906 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:29:29.700615   20906 start.go:340] cluster config:
	{Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 04:29:29.700671   20906 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:29:29.707704   20906 out.go:177] * Starting "stopped-upgrade-231000" primary control-plane node in "stopped-upgrade-231000" cluster
	I0923 04:29:29.711741   20906 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 04:29:29.711762   20906 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0923 04:29:29.711770   20906 cache.go:56] Caching tarball of preloaded images
	I0923 04:29:29.711832   20906 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:29:29.711840   20906 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0923 04:29:29.711899   20906 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/config.json ...
	I0923 04:29:29.712375   20906 start.go:360] acquireMachinesLock for stopped-upgrade-231000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:29:29.712407   20906 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "stopped-upgrade-231000"
	I0923 04:29:29.712418   20906 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:29:29.712423   20906 fix.go:54] fixHost starting: 
	I0923 04:29:29.712546   20906 fix.go:112] recreateIfNeeded on stopped-upgrade-231000: state=Stopped err=<nil>
	W0923 04:29:29.712555   20906 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:29:29.720728   20906 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-231000" ...
	I0923 04:29:29.724679   20906 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:29:29.724772   20906 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53238-:22,hostfwd=tcp::53239-:2376,hostname=stopped-upgrade-231000 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/disk.qcow2
	I0923 04:29:29.775863   20906 main.go:141] libmachine: STDOUT: 
	I0923 04:29:29.775892   20906 main.go:141] libmachine: STDERR: 
	I0923 04:29:29.775899   20906 main.go:141] libmachine: Waiting for VM to start (ssh -p 53238 docker@127.0.0.1)...
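This wait loop is polling the hostfwd rule from the qemu command line above (hostfwd=tcp::53238-:22), which forwards localhost:53238 on the host to the guest's SSH port. The equivalent manual probe would be (illustrative, using the machine's key path from the log):

    ssh -p 53238 -i /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa docker@127.0.0.1 true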
	I0923 04:29:49.628407   20906 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/config.json ...
	I0923 04:29:49.628607   20906 machine.go:93] provisionDockerMachine start ...
	I0923 04:29:49.628654   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:49.628789   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:49.628794   20906 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 04:29:49.682740   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 04:29:49.682758   20906 buildroot.go:166] provisioning hostname "stopped-upgrade-231000"
	I0923 04:29:49.682838   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:49.682952   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:49.682961   20906 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-231000 && echo "stopped-upgrade-231000" | sudo tee /etc/hostname
	I0923 04:29:49.740329   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-231000
	
	I0923 04:29:49.740385   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:49.740514   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:49.740522   20906 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-231000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-231000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-231000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 04:29:49.801451   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 04:29:49.801465   20906 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19690-18362/.minikube CaCertPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19690-18362/.minikube}
	I0923 04:29:49.801479   20906 buildroot.go:174] setting up certificates
	I0923 04:29:49.801484   20906 provision.go:84] configureAuth start
	I0923 04:29:49.801488   20906 provision.go:143] copyHostCerts
	I0923 04:29:49.801580   20906 exec_runner.go:144] found /Users/jenkins/minikube-integration/19690-18362/.minikube/key.pem, removing ...
	I0923 04:29:49.801587   20906 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19690-18362/.minikube/key.pem
	I0923 04:29:49.801697   20906 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19690-18362/.minikube/key.pem (1675 bytes)
	I0923 04:29:49.801865   20906 exec_runner.go:144] found /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.pem, removing ...
	I0923 04:29:49.801870   20906 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.pem
	I0923 04:29:49.801920   20906 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.pem (1078 bytes)
	I0923 04:29:49.802049   20906 exec_runner.go:144] found /Users/jenkins/minikube-integration/19690-18362/.minikube/cert.pem, removing ...
	I0923 04:29:49.802054   20906 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19690-18362/.minikube/cert.pem
	I0923 04:29:49.802107   20906 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19690-18362/.minikube/cert.pem (1123 bytes)
	I0923 04:29:49.802201   20906 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-231000 san=[127.0.0.1 localhost minikube stopped-upgrade-231000]
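provision.go is generating a TLS server certificate signed by the minikube CA with the SAN list logged above. A rough openssl equivalent of that step (an illustrative sketch run under bash, not the code minikube runs; the file names are placeholders for the certs under ~/.minikube):

    # Key + CSR, then sign with the CA, embedding the SANs from the log line above.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.stopped-upgrade-231000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-231000")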
	I0923 04:29:50.004852   20906 provision.go:177] copyRemoteCerts
	I0923 04:29:50.004911   20906 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 04:29:50.004921   20906 sshutil.go:53] new ssh client: &{IP:localhost Port:53238 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0923 04:29:50.035162   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 04:29:50.042466   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0923 04:29:50.048968   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 04:29:50.055827   20906 provision.go:87] duration metric: took 254.332916ms to configureAuth
	I0923 04:29:50.055835   20906 buildroot.go:189] setting minikube options for container-runtime
	I0923 04:29:50.055946   20906 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:29:50.055992   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.056086   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:50.056091   20906 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 04:29:50.111877   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 04:29:50.111888   20906 buildroot.go:70] root file system type: tmpfs
	I0923 04:29:50.111944   20906 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 04:29:50.111998   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.112105   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:50.112140   20906 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 04:29:50.173544   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 04:29:50.173615   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.173727   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:50.173738   20906 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 04:29:50.532169   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 04:29:50.532186   20906 machine.go:96] duration metric: took 903.577917ms to provisionDockerMachine
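The `diff -u old new || { mv ...; systemctl ...; }` one-liner above is an idempotency guard: the rendered unit is only swapped in, and docker only reloaded and restarted, when the new file differs from (or is missing at) the installed path, as the "can't stat" output shows on this first run. The same pattern in isolation (a generic sketch, not minikube code; render_unit is a placeholder):

    # Regenerate the unit, but only install and restart when it actually changed.
    render_unit > /tmp/docker.service.new
    sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new || {
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload && sudo systemctl restart docker
    }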
	I0923 04:29:50.532194   20906 start.go:293] postStartSetup for "stopped-upgrade-231000" (driver="qemu2")
	I0923 04:29:50.532200   20906 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 04:29:50.532261   20906 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 04:29:50.532270   20906 sshutil.go:53] new ssh client: &{IP:localhost Port:53238 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0923 04:29:50.564602   20906 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 04:29:50.565979   20906 info.go:137] Remote host: Buildroot 2021.02.12
	I0923 04:29:50.565989   20906 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19690-18362/.minikube/addons for local assets ...
	I0923 04:29:50.566098   20906 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19690-18362/.minikube/files for local assets ...
	I0923 04:29:50.566226   20906 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem -> 189142.pem in /etc/ssl/certs
	I0923 04:29:50.566361   20906 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 04:29:50.569720   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem --> /etc/ssl/certs/189142.pem (1708 bytes)
	I0923 04:29:50.577420   20906 start.go:296] duration metric: took 45.217958ms for postStartSetup
	I0923 04:29:50.577441   20906 fix.go:56] duration metric: took 20.86511375s for fixHost
	I0923 04:29:50.577497   20906 main.go:141] libmachine: Using SSH client type: native
	I0923 04:29:50.577630   20906 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f1dc00] 0x102f20440 <nil>  [] 0s} localhost 53238 <nil> <nil>}
	I0923 04:29:50.577636   20906 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 04:29:50.636388   20906 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727090991.061775921
	
	I0923 04:29:50.636399   20906 fix.go:216] guest clock: 1727090991.061775921
	I0923 04:29:50.636404   20906 fix.go:229] Guest: 2024-09-23 04:29:51.061775921 -0700 PDT Remote: 2024-09-23 04:29:50.577443 -0700 PDT m=+20.986053543 (delta=484.332921ms)
	I0923 04:29:50.636416   20906 fix.go:200] guest clock delta is within tolerance: 484.332921ms
	I0923 04:29:50.636419   20906 start.go:83] releasing machines lock for "stopped-upgrade-231000", held for 20.924101875s
	I0923 04:29:50.636500   20906 ssh_runner.go:195] Run: cat /version.json
	I0923 04:29:50.636511   20906 sshutil.go:53] new ssh client: &{IP:localhost Port:53238 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0923 04:29:50.636500   20906 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 04:29:50.636556   20906 sshutil.go:53] new ssh client: &{IP:localhost Port:53238 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	W0923 04:29:50.637261   20906 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:53476->127.0.0.1:53238: write: broken pipe
	I0923 04:29:50.637281   20906 retry.go:31] will retry after 204.357413ms: ssh: handshake failed: write tcp 127.0.0.1:53476->127.0.0.1:53238: write: broken pipe
	W0923 04:29:50.871776   20906 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0923 04:29:50.871840   20906 ssh_runner.go:195] Run: systemctl --version
	I0923 04:29:50.873686   20906 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 04:29:50.875384   20906 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 04:29:50.875426   20906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0923 04:29:50.878242   20906 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 04:29:50.882903   20906 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 04:29:50.882913   20906 start.go:495] detecting cgroup driver to use...
	I0923 04:29:50.883034   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 04:29:50.890412   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0923 04:29:50.894181   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 04:29:50.897831   20906 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 04:29:50.897878   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 04:29:50.901936   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 04:29:50.905800   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 04:29:50.909531   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 04:29:50.913193   20906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 04:29:50.917102   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 04:29:50.921427   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 04:29:50.925632   20906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 04:29:50.929155   20906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 04:29:50.932501   20906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
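Both knobs touched here are prerequisites for the bridge CNI selected earlier: pod traffic must be forwardable (ip_forward) and bridged traffic must traverse iptables (bridge-nf-call-iptables). On a generic Linux guest the persistent form would look like this (a sketch; minikube's ISO ships its own defaults):

    sudo modprobe br_netfilter     # provides the net.bridge.* sysctls
    printf '%s\n' 'net.ipv4.ip_forward = 1' \
      'net.bridge.bridge-nf-call-iptables = 1' |
      sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system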
	I0923 04:29:50.935125   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:50.998334   20906 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 04:29:51.009371   20906 start.go:495] detecting cgroup driver to use...
	I0923 04:29:51.009430   20906 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 04:29:51.014393   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 04:29:51.019535   20906 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 04:29:51.028022   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 04:29:51.032118   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 04:29:51.036268   20906 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 04:29:51.097384   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 04:29:51.102527   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 04:29:51.108166   20906 ssh_runner.go:195] Run: which cri-dockerd
	I0923 04:29:51.109360   20906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 04:29:51.112433   20906 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 04:29:51.117107   20906 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 04:29:51.203149   20906 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 04:29:51.283409   20906 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 04:29:51.283468   20906 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 04:29:51.288990   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:51.370191   20906 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 04:29:52.496879   20906 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126675708s)
	I0923 04:29:52.496954   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 04:29:52.501950   20906 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 04:29:52.508198   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 04:29:52.512629   20906 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 04:29:52.588764   20906 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 04:29:52.674028   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:52.754056   20906 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 04:29:52.760860   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 04:29:52.765408   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:52.848427   20906 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 04:29:52.891525   20906 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 04:29:52.891623   20906 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 04:29:52.893566   20906 start.go:563] Will wait 60s for crictl version
	I0923 04:29:52.893620   20906 ssh_runner.go:195] Run: which crictl
	I0923 04:29:52.895149   20906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 04:29:52.912301   20906 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0923 04:29:52.912389   20906 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 04:29:52.929599   20906 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 04:29:52.951291   20906 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0923 04:29:52.951403   20906 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0923 04:29:52.952888   20906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
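10.0.2.2 is the address qemu's user-mode networking (the -nic user,... flag in this VM's command line) assigns to the host, so pinning host.minikube.internal to it gives the guest and its pods a stable name for the macOS side. Inside the guest the mapping can be checked directly (illustrative):

    grep host.minikube.internal /etc/hosts   # expect: 10.0.2.2  host.minikube.internal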
	I0923 04:29:52.956838   20906 kubeadm.go:883] updating cluster {Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0923 04:29:52.956896   20906 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0923 04:29:52.956950   20906 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 04:29:52.967941   20906 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 04:29:52.967950   20906 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
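The mismatch is purely one of image names: the v1.24-era preload tarball tags everything under the old k8s.gcr.io registry, while this minikube build checks for the registry.k8s.io names that Kubernetes later moved to, so the images are treated as missing and the slower LoadCachedImages path below takes over. Retagging inside the guest would satisfy the check (an illustrative sketch, not what minikube does here):

    for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
      docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
    done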
	I0923 04:29:52.968007   20906 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 04:29:52.971193   20906 ssh_runner.go:195] Run: which lz4
	I0923 04:29:52.972488   20906 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 04:29:52.973793   20906 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 04:29:52.973805   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0923 04:29:53.937359   20906 docker.go:649] duration metric: took 964.919375ms to copy over tarball
	I0923 04:29:53.937425   20906 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 04:29:55.113610   20906 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.176166458s)
	I0923 04:29:55.113624   20906 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 04:29:55.130215   20906 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 04:29:55.133499   20906 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0923 04:29:55.138113   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:55.202159   20906 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 04:29:56.935930   20906 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.733759458s)
	I0923 04:29:56.936038   20906 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 04:29:56.947027   20906 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 04:29:56.947039   20906 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0923 04:29:56.947045   20906 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 04:29:56.952277   20906 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:29:56.954969   20906 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:29:56.957367   20906 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:29:56.957734   20906 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:29:56.959779   20906 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:29:56.959910   20906 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:29:56.961207   20906 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:29:56.961288   20906 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:29:56.962428   20906 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:29:56.962466   20906 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:29:56.963060   20906 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0923 04:29:56.963589   20906 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:29:56.964405   20906 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:29:56.965344   20906 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:29:56.965382   20906 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0923 04:29:56.965825   20906 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:29:57.349123   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:29:57.360088   20906 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0923 04:29:57.360115   20906 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:29:57.360186   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0923 04:29:57.366656   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:29:57.370725   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0923 04:29:57.372642   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:29:57.379338   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0923 04:29:57.383163   20906 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0923 04:29:57.383182   20906 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:29:57.383262   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0923 04:29:57.383652   20906 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0923 04:29:57.383665   20906 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:29:57.383703   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0923 04:29:57.399755   20906 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0923 04:29:57.399774   20906 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0923 04:29:57.399784   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0923 04:29:57.399830   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0923 04:29:57.400849   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0923 04:29:57.411206   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0923 04:29:57.414545   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:29:57.425218   20906 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0923 04:29:57.425242   20906 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:29:57.425308   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0923 04:29:57.435805   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0923 04:29:57.450469   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0923 04:29:57.461332   20906 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0923 04:29:57.461355   20906 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0923 04:29:57.461428   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0923 04:29:57.472583   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0923 04:29:57.472712   20906 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0923 04:29:57.474630   20906 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0923 04:29:57.474647   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0923 04:29:57.483227   20906 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0923 04:29:57.483239   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0923 04:29:57.489250   20906 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0923 04:29:57.489404   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:29:57.517722   20906 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0923 04:29:57.517827   20906 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0923 04:29:57.517851   20906 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:29:57.517927   20906 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0923 04:29:57.528757   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0923 04:29:57.528900   20906 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0923 04:29:57.530506   20906 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0923 04:29:57.530524   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0923 04:29:57.574773   20906 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0923 04:29:57.574788   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0923 04:29:57.616621   20906 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0923 04:29:57.828378   20906 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0923 04:29:57.828527   20906 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:29:57.839248   20906 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0923 04:29:57.839272   20906 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:29:57.839337   20906 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:29:57.853538   20906 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0923 04:29:57.853669   20906 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0923 04:29:57.855145   20906 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0923 04:29:57.855155   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0923 04:29:57.887473   20906 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0923 04:29:57.887488   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0923 04:29:58.131318   20906 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0923 04:29:58.131368   20906 cache_images.go:92] duration metric: took 1.184319833s to LoadCachedImages
	W0923 04:29:58.131411   20906 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
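The cache-load cycle repeated above follows one pattern per image: inspect the runtime for the expected hash, remove any stale (amd64) copy, `stat` the tarball under /var/lib/minikube/images to skip redundant transfers, scp it over when missing, and stream it into the daemon with `sudo cat <tar> | docker load`. A minimal Go sketch of that transfer-and-load step, assuming a hypothetical runSSH helper and a reachable host named "node" (not minikube's actual API):

package main

import (
	"fmt"
	"os/exec"
)

// runSSH runs a command on the guest; stubbed with local exec purely
// for illustration.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// loadCachedImage mirrors the log above: skip the copy when the tarball
// already exists on the node, then stream it into the docker daemon.
func loadCachedImage(local, remote string) error {
	if err := runSSH(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err != nil {
		// Not present yet: minikube does this over its ssh_runner scp.
		if err := exec.Command("scp", local, "node:"+remote).Run(); err != nil {
			return fmt.Errorf("transfer: %w", err)
		}
	}
	// Equivalent of: sudo cat <tar> | docker load
	return runSSH(fmt.Sprintf("sudo cat %s | docker load", remote))
}

func main() {
	if err := loadCachedImage(
		"/Users/jenkins/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7",
	); err != nil {
		fmt.Println("load failed:", err)
	}
}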
	I0923 04:29:58.131419   20906 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0923 04:29:58.131478   20906 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-231000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 04:29:58.131577   20906 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 04:29:58.148328   20906 cni.go:84] Creating CNI manager for ""
	I0923 04:29:58.148340   20906 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:29:58.148345   20906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 04:29:58.148354   20906 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-231000 NodeName:stopped-upgrade-231000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 04:29:58.148437   20906 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-231000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 04:29:58.148497   20906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0923 04:29:58.151652   20906 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 04:29:58.151695   20906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 04:29:58.154480   20906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0923 04:29:58.159738   20906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 04:29:58.164900   20906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0923 04:29:58.170549   20906 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0923 04:29:58.172095   20906 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
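The one-liner above keeps /etc/hosts idempotent: any previous control-plane.minikube.internal line is filtered out before the fresh 10.0.2.15 mapping is appended, with the result staged in /tmp and copied into place via sudo. The same filtering logic in Go, as an illustrative sketch only (it writes the file directly and so must run as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const alias = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the control-plane alias.
		if strings.HasSuffix(line, "\t"+alias) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "10.0.2.15\t"+alias, "")
	// The log stages via /tmp and `sudo cp` because the SSH user is not
	// root; writing directly here keeps the sketch short.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0644); err != nil {
		panic(err)
	}
	fmt.Println("updated /etc/hosts")
}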
	I0923 04:29:58.175778   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:29:58.256373   20906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 04:29:58.262129   20906 certs.go:68] Setting up /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000 for IP: 10.0.2.15
	I0923 04:29:58.262155   20906 certs.go:194] generating shared ca certs ...
	I0923 04:29:58.262165   20906 certs.go:226] acquiring lock for ca certs: {Name:mkf84bedb9b35f23af77b237ccfe7d150b52a82b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:29:58.262440   20906 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.key
	I0923 04:29:58.262491   20906 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/proxy-client-ca.key
	I0923 04:29:58.262498   20906 certs.go:256] generating profile certs ...
	I0923 04:29:58.262581   20906 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/client.key
	I0923 04:29:58.262595   20906 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec
	I0923 04:29:58.262607   20906 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0923 04:29:58.435039   20906 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec ...
	I0923 04:29:58.435060   20906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec: {Name:mk6a472e58080bfd0970e58c00a05abc94b44f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:29:58.435367   20906 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec ...
	I0923 04:29:58.435374   20906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec: {Name:mk77dd9d88f71f3e7de565326d78302c361ce955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:29:58.435505   20906 certs.go:381] copying /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.crt.0be955ec -> /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.crt
	I0923 04:29:58.435644   20906 certs.go:385] copying /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.key.0be955ec -> /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.key
	I0923 04:29:58.435804   20906 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/proxy-client.key
	I0923 04:29:58.435947   20906 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/18914.pem (1338 bytes)
	W0923 04:29:58.435982   20906 certs.go:480] ignoring /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/18914_empty.pem, impossibly tiny 0 bytes
	I0923 04:29:58.435989   20906 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 04:29:58.436009   20906 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem (1078 bytes)
	I0923 04:29:58.436029   20906 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem (1123 bytes)
	I0923 04:29:58.436051   20906 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/key.pem (1675 bytes)
	I0923 04:29:58.436096   20906 certs.go:484] found cert: /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem (1708 bytes)
	I0923 04:29:58.436534   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 04:29:58.445199   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 04:29:58.453130   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 04:29:58.465938   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 04:29:58.474273   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 04:29:58.481706   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 04:29:58.490205   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 04:29:58.498557   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 04:29:58.506106   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 04:29:58.513824   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/18914.pem --> /usr/share/ca-certificates/18914.pem (1338 bytes)
	I0923 04:29:58.521859   20906 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/ssl/certs/189142.pem --> /usr/share/ca-certificates/189142.pem (1708 bytes)
	I0923 04:29:58.529978   20906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 04:29:58.536106   20906 ssh_runner.go:195] Run: openssl version
	I0923 04:29:58.538435   20906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 04:29:58.542429   20906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 04:29:58.544706   20906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:29 /usr/share/ca-certificates/minikubeCA.pem
	I0923 04:29:58.544769   20906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 04:29:58.547270   20906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 04:29:58.550784   20906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18914.pem && ln -fs /usr/share/ca-certificates/18914.pem /etc/ssl/certs/18914.pem"
	I0923 04:29:58.554194   20906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18914.pem
	I0923 04:29:58.555886   20906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:17 /usr/share/ca-certificates/18914.pem
	I0923 04:29:58.555926   20906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18914.pem
	I0923 04:29:58.557928   20906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18914.pem /etc/ssl/certs/51391683.0"
	I0923 04:29:58.561422   20906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/189142.pem && ln -fs /usr/share/ca-certificates/189142.pem /etc/ssl/certs/189142.pem"
	I0923 04:29:58.565535   20906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/189142.pem
	I0923 04:29:58.567719   20906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:17 /usr/share/ca-certificates/189142.pem
	I0923 04:29:58.567770   20906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/189142.pem
	I0923 04:29:58.570198   20906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/189142.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 04:29:58.574065   20906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 04:29:58.576056   20906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 04:29:58.579533   20906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 04:29:58.581705   20906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 04:29:58.585103   20906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 04:29:58.587010   20906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 04:29:58.588800   20906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
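Each `openssl x509 -checkend 86400` call above asks one question: does this certificate remain valid for the next 86400 seconds (24 hours)? A nonzero exit would mark the cert for regeneration. An equivalent check in Go's standard library, as a sketch against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}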
	I0923 04:29:58.590715   20906 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-231000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53273 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-231000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0923 04:29:58.590788   20906 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 04:29:58.600846   20906 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 04:29:58.603921   20906 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 04:29:58.603930   20906 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 04:29:58.603958   20906 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 04:29:58.606635   20906 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 04:29:58.606671   20906 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-231000" does not appear in /Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:29:58.606686   20906 kubeconfig.go:62] /Users/jenkins/minikube-integration/19690-18362/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-231000" cluster setting kubeconfig missing "stopped-upgrade-231000" context setting]
	I0923 04:29:58.606848   20906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/kubeconfig: {Name:mke35d42fdea9892a3eb00f2ea9c8fc1f44681bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:29:58.607499   20906 kapi.go:59] client config for stopped-upgrade-231000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/client.key", CAFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044f6030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 04:29:58.608432   20906 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 04:29:58.610960   20906 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-231000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
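Drift detection above hinges on the exit status of `diff -u` against the freshly rendered kubeadm.yaml.new: exit 0 means the on-disk config can be reused, exit 1 (here, the cri-dockerd socket gained its unix:// scheme and the cgroup driver changed from systemd to cgroupfs) forces a reconfigure. A sketch of that decision in Go, exec'ing diff the way the ssh_runner does:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when files match, 1 when they differ, 2 on error.
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("config unchanged; reuse existing cluster config")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
	default:
		panic(err) // diff itself failed
	}
}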
	I0923 04:29:58.610966   20906 kubeadm.go:1160] stopping kube-system containers ...
	I0923 04:29:58.611014   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 04:29:58.621772   20906 docker.go:483] Stopping containers: [878adc2fd22f 280cd681106d 3ce2ac2fc4e2 2143f864eac3 e30e415fd402 7eb61d38585e f7e276d79075 c2031aedf91f]
	I0923 04:29:58.621838   20906 ssh_runner.go:195] Run: docker stop 878adc2fd22f 280cd681106d 3ce2ac2fc4e2 2143f864eac3 e30e415fd402 7eb61d38585e f7e276d79075 c2031aedf91f
	I0923 04:29:58.632497   20906 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 04:29:58.638106   20906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 04:29:58.641342   20906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 04:29:58.641348   20906 kubeadm.go:157] found existing configuration files:
	
	I0923 04:29:58.641378   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/admin.conf
	I0923 04:29:58.644204   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 04:29:58.644230   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 04:29:58.646664   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/kubelet.conf
	I0923 04:29:58.649452   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 04:29:58.649478   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 04:29:58.652474   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/controller-manager.conf
	I0923 04:29:58.654872   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 04:29:58.654898   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 04:29:58.657815   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/scheduler.conf
	I0923 04:29:58.660840   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 04:29:58.660863   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
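The four grep/rm cycles above implement stale-config cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf, the runner greps for the expected control-plane endpoint and deletes the file when the grep fails (here all four are simply absent), so the subsequent kubeadm phases regenerate them against port 53273. The same loop as a Go sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:53273"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove so kubeadm rewrites it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}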
	I0923 04:29:58.663532   20906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 04:29:58.666272   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:29:58.688320   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:29:59.180338   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:29:59.314849   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:29:59.335680   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 04:29:59.356498   20906 api_server.go:52] waiting for apiserver process to appear ...
	I0923 04:29:59.356588   20906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:29:59.859023   20906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:00.358684   20906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:00.858633   20906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:30:00.867124   20906 api_server.go:72] duration metric: took 1.510635166s to wait for apiserver process to appear ...
	I0923 04:30:00.867134   20906 api_server.go:88] waiting for apiserver healthz status ...
	I0923 04:30:00.867144   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:05.869256   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:05.869330   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:10.869614   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:10.869645   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:15.869995   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:15.870087   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:20.870628   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:20.870651   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:25.871253   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:25.871323   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:30.872266   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:30.872338   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:35.873519   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:35.873541   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:40.874862   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:40.874924   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:45.877000   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:45.877077   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:50.879627   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:50.879677   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:30:55.882000   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:30:55.882043   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:00.884315   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
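The wait loop above polls https://10.0.2.15:8443/healthz with a short per-request timeout; every probe in this run hits the client deadline after about five seconds because the apiserver never comes up, and after enough failures minikube falls back to gathering diagnostics. A comparable poll loop in Go, as a sketch (TLS verification is skipped only because the sketch has no minikube CA wired in):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // mirrors the context-deadline errors above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	fmt.Println("timed out waiting for apiserver healthz")
}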
	I0923 04:31:00.884537   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:00.907895   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:00.908035   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:00.923527   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:00.923626   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:00.936376   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:00.936475   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:00.947194   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:00.947284   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:00.957121   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:00.957205   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:00.967749   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:00.967828   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:00.978233   20906 logs.go:276] 0 containers: []
	W0923 04:31:00.978250   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:00.978320   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:00.992537   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:00.992564   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:00.992569   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:01.011681   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:01.011691   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:01.023937   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:01.023947   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:01.035313   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:01.035324   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:01.047027   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:01.047038   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:01.062155   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:01.062166   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:01.073715   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:01.073726   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:01.089785   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:01.089801   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:01.094260   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:01.094267   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:01.109721   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:01.109730   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:01.136400   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:01.136411   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:01.156354   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:01.156369   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:01.181773   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:01.181788   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:01.199564   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:01.199574   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:01.211741   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:01.211751   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:01.249509   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:01.249520   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:01.369478   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:01.369492   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
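Each diagnostic pass, including the two repetitions that follow, uses the same recipe: list containers per component with a `name=k8s_<component>` filter, tail the last 400 lines of each, then collect the kubelet and docker journals, dmesg, and `kubectl describe nodes`. A condensed Go sketch of the per-container part:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists container IDs whose names match a kube component,
// mirroring: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containersFor(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, comp := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	} {
		for _, id := range containersFor(comp) {
			// Equivalent of: docker logs --tail 400 <id>
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", comp, id, out)
		}
	}
}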
	I0923 04:31:03.884592   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:08.886836   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:08.887286   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:08.922702   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:08.922872   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:08.943632   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:08.943751   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:08.962729   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:08.962813   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:08.974404   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:08.974483   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:08.986425   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:08.986512   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:08.997812   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:08.997886   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:09.008674   20906 logs.go:276] 0 containers: []
	W0923 04:31:09.008692   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:09.008772   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:09.019242   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:09.019264   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:09.019269   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:09.023794   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:09.023801   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:09.048614   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:09.048629   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:09.060687   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:09.060698   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:09.076049   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:09.076062   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:09.093854   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:09.093866   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:09.105406   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:09.105418   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:09.143615   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:09.143630   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:09.158899   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:09.158910   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:09.170576   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:09.170589   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:09.182538   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:09.182551   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:09.193886   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:09.193898   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:09.206027   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:09.206037   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:09.217660   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:09.217672   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:09.253899   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:09.253908   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:09.268441   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:09.268451   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:09.284248   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:09.284261   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:11.811549   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:16.813901   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:16.814298   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:16.841422   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:16.841550   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:16.858308   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:16.858415   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:16.871952   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:16.872044   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:16.883926   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:16.883998   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:16.894290   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:16.894355   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:16.904489   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:16.904551   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:16.914640   20906 logs.go:276] 0 containers: []
	W0923 04:31:16.914651   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:16.914710   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:16.928014   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:16.928033   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:16.928038   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:16.940611   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:16.940620   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:16.956917   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:16.956926   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:16.968319   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:16.968329   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:17.008787   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:17.008804   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:17.033722   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:17.033732   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:17.052256   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:17.052265   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:17.067787   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:17.067803   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:17.079710   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:17.079725   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:17.097077   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:17.097087   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:17.108501   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:17.108512   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:17.112909   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:17.112916   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:17.126690   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:17.126704   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:17.140834   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:17.140845   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:17.152589   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:17.152601   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:17.190306   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:17.190316   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:17.201765   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:17.201775   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:19.728032   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:24.730342   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:24.730531   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:24.743146   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:24.743242   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:24.757597   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:24.757682   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:24.768327   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:24.768417   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:24.778834   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:24.778924   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:24.797609   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:24.797692   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:24.808302   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:24.808380   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:24.819109   20906 logs.go:276] 0 containers: []
	W0923 04:31:24.819121   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:24.819190   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:24.829693   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
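Between probes, containers are rediscovered by filtering docker ps -a on the k8s_<component> name prefix; most components report two IDs here because each has been restarted once. A sketch of the same enumeration, with the component list taken from the log:

    // Sketch of the per-component container enumeration above: each
    // component is looked up by its k8s_<name> container-name prefix.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			// Matches the W-level "No container was found matching" line.
    			fmt.Printf("no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }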
	I0923 04:31:24.829713   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:24.829719   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:24.855085   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:24.855100   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:24.872454   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:24.872464   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:24.883818   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:24.883828   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:24.908026   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:24.908034   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:24.912340   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:24.912350   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:24.926282   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:24.926292   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:24.937774   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:24.937798   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:24.952279   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:24.952289   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:24.990954   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:24.990970   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:25.010986   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:25.010996   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:25.022793   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:25.022808   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:25.035110   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:25.035121   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:25.073594   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:25.073603   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:25.087286   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:25.087298   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:25.099483   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:25.099498   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:25.111168   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:25.111179   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:27.629923   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:32.631378   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:32.631790   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:32.662037   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:32.662225   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:32.680979   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:32.681095   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:32.696038   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:32.696126   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:32.707449   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:32.707540   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:32.718255   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:32.718334   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:32.728815   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:32.728887   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:32.739019   20906 logs.go:276] 0 containers: []
	W0923 04:31:32.739032   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:32.739103   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:32.750049   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:32.750069   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:32.750074   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:32.764220   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:32.764229   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:32.781039   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:32.781055   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:32.792693   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:32.792704   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:32.803977   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:32.803990   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:32.840535   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:32.840543   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:32.854145   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:32.854154   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:32.865645   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:32.865657   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:32.877016   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:32.877025   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:32.888520   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:32.888532   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:32.921735   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:32.921746   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:32.946951   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:32.946960   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:32.962971   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:32.962981   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:32.989319   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:32.989336   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:33.001748   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:33.001763   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:33.006009   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:33.006017   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:33.017871   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:33.017887   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:35.536417   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:40.538810   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:40.539467   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:40.585010   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:40.585179   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:40.605381   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:40.605497   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:40.629013   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:40.629096   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:40.639789   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:40.639871   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:40.650434   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:40.650517   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:40.661822   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:40.661894   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:40.672225   20906 logs.go:276] 0 containers: []
	W0923 04:31:40.672241   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:40.672298   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:40.686859   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:40.686879   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:40.686884   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:40.691563   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:40.691573   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:40.705920   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:40.705930   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:40.735807   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:40.735820   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:40.753722   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:40.753731   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:40.765347   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:40.765357   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:40.776752   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:40.776767   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:40.788005   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:40.788044   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:40.802721   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:40.802732   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:40.816756   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:40.816767   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:40.850954   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:40.850966   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:40.865435   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:40.865446   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:40.880623   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:40.880633   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:40.916032   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:40.916042   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:40.926700   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:40.926715   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:40.938325   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:40.938339   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:40.949790   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:40.949799   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:43.477469   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:48.477923   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:48.478439   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:48.514077   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:48.514254   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:48.538084   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:48.538190   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:48.555791   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:48.555878   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:48.566570   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:48.566656   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:48.577316   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:48.577396   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:48.588132   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:48.588208   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:48.598878   20906 logs.go:276] 0 containers: []
	W0923 04:31:48.598891   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:48.598966   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:48.610096   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:48.610114   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:48.610119   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:48.636525   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:48.636540   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:48.659560   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:48.659577   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:48.676419   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:48.676430   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:48.687876   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:48.687891   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:48.725047   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:48.725056   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:48.736665   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:48.736675   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:48.750239   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:48.750248   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:48.761536   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:48.761547   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:48.773536   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:48.773553   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:48.778232   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:48.778239   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:48.814112   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:48.814128   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:48.839191   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:48.839202   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:48.853215   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:48.853229   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:48.873738   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:48.873752   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:48.891340   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:48.891352   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:48.915615   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:48.915624   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:51.429538   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:31:56.432389   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:31:56.432992   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:31:56.472202   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:31:56.472405   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:31:56.493742   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:31:56.493871   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:31:56.508954   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:31:56.509046   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:31:56.521413   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:31:56.521501   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:31:56.532112   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:31:56.532192   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:31:56.542183   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:31:56.542269   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:31:56.551939   20906 logs.go:276] 0 containers: []
	W0923 04:31:56.551952   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:31:56.552022   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:31:56.565897   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:31:56.565913   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:31:56.565918   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:31:56.576918   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:31:56.576930   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:31:56.590385   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:31:56.590395   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:31:56.606557   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:31:56.606573   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:31:56.617690   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:31:56.617700   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:31:56.629901   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:31:56.629915   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:31:56.648248   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:31:56.648258   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:31:56.660166   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:31:56.660176   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:31:56.685886   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:31:56.685896   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:31:56.723751   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:31:56.723776   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:31:56.728023   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:31:56.728030   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:31:56.764658   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:31:56.764673   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:31:56.776291   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:31:56.776307   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:31:56.790433   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:31:56.790444   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:31:56.816919   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:31:56.816935   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:31:56.831911   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:31:56.831921   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:31:56.843177   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:31:56.843190   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:31:59.368693   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:04.369116   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:04.369356   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:04.391828   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:04.391952   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:04.407543   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:04.407640   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:04.420359   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:04.420440   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:04.431137   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:04.431221   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:04.442161   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:04.442245   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:04.452823   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:04.452901   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:04.463243   20906 logs.go:276] 0 containers: []
	W0923 04:32:04.463256   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:04.463325   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:04.473911   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:04.473928   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:04.473933   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:04.499103   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:04.499111   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:04.511215   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:04.511227   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:04.547236   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:04.547246   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:04.561265   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:04.561273   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:04.572140   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:04.572156   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:04.584155   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:04.584171   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:04.595886   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:04.595898   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:04.631888   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:04.631897   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:04.657053   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:04.657063   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:04.672755   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:04.672764   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:04.689032   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:04.689048   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:04.703322   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:04.703334   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:04.730441   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:04.730457   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:04.735048   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:04.735053   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:04.761773   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:04.761785   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:04.774644   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:04.774659   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:07.288264   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:12.290976   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:12.291438   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:12.328228   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:12.328392   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:12.347803   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:12.347929   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:12.362413   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:12.362509   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:12.375059   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:12.375146   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:12.386215   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:12.386297   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:12.396908   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:12.396996   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:12.407942   20906 logs.go:276] 0 containers: []
	W0923 04:32:12.407954   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:12.408021   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:12.418915   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:12.418934   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:12.418940   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:12.453788   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:12.453803   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:12.466137   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:12.466151   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:12.484240   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:12.484249   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:12.496655   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:12.496667   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:12.507943   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:12.507954   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:12.520562   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:12.520572   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:12.524663   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:12.524671   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:12.539260   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:12.539270   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:12.563755   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:12.563770   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:12.582242   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:12.582252   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:12.593484   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:12.593494   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:12.609590   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:12.609605   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:12.630617   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:12.630628   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:12.666977   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:12.666986   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:12.691472   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:12.691479   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:12.703048   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:12.703064   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:15.217618   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:20.219891   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:20.220122   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:20.239554   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:20.239671   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:20.254112   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:20.254191   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:20.273205   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:20.273287   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:20.283965   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:20.284055   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:20.294433   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:20.294513   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:20.305478   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:20.305554   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:20.315855   20906 logs.go:276] 0 containers: []
	W0923 04:32:20.315866   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:20.315938   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:20.326570   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:20.326587   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:20.326592   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:20.340946   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:20.340957   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:20.357666   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:20.357681   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:20.368636   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:20.368652   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:20.380357   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:20.380369   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:20.395409   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:20.395418   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:20.406548   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:20.406559   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:20.446178   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:20.446193   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:20.457807   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:20.457822   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:20.482002   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:20.482010   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:20.493706   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:20.493720   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:20.510870   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:20.510884   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:20.515616   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:20.515623   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:20.527704   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:20.527716   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:20.565771   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:20.565780   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:20.580126   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:20.580139   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:20.597384   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:20.597398   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:23.123882   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:28.124770   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:28.125021   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:28.151299   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:28.151411   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:28.165824   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:28.165920   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:28.176863   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:28.176948   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:28.187556   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:28.187633   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:28.197778   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:28.197863   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:28.208449   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:28.208535   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:28.218628   20906 logs.go:276] 0 containers: []
	W0923 04:32:28.218638   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:28.218700   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:28.229282   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:28.229300   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:28.229305   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:28.254115   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:28.254123   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:28.293225   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:28.293234   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:28.297548   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:28.297555   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:28.311276   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:28.311286   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:28.323645   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:28.323657   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:28.340778   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:28.340789   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:28.376127   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:28.376139   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:28.392560   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:28.392570   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:28.407768   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:28.407779   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:28.419601   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:28.419612   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:28.431413   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:28.431424   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:28.443019   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:28.443030   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:28.469486   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:28.469500   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:28.483158   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:28.483169   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:28.495085   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:28.495096   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:28.507772   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:28.507782   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:31.021502   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:36.023886   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:36.024071   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:36.039966   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:36.040066   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:36.053690   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:36.053784   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:36.064832   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:36.064915   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:36.076599   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:36.076687   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:36.087081   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:36.087151   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:36.097771   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:36.097852   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:36.107946   20906 logs.go:276] 0 containers: []
	W0923 04:32:36.107957   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:36.108017   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:36.118051   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:36.118069   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:36.118075   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:36.156618   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:36.156627   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:36.191482   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:36.191493   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:36.211317   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:36.211330   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:36.235646   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:36.235659   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:36.249553   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:36.249563   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:36.265337   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:36.265353   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:36.280136   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:36.280147   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:36.294091   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:36.294101   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:36.311242   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:36.311256   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:36.322631   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:36.322642   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:36.334798   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:36.334812   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:36.339350   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:36.339356   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:36.357136   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:36.357146   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:36.369006   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:36.369020   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:36.384166   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:36.384181   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:36.395578   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:36.395590   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:38.922061   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:43.924734   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:43.924981   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:43.944911   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:43.945024   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:43.960093   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:43.960175   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:43.971899   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:43.971969   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:43.983161   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:43.983246   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:43.993732   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:43.993805   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:44.004551   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:44.004635   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:44.014383   20906 logs.go:276] 0 containers: []
	W0923 04:32:44.014395   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:44.014466   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:44.024642   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:44.024660   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:44.024664   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:44.060701   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:44.060718   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:44.099048   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:44.099060   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:44.111672   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:44.111682   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:44.123390   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:44.123402   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:44.140469   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:44.140478   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:44.144639   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:44.144646   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:44.169046   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:44.169057   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:44.189166   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:44.189182   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:44.200622   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:44.200630   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:44.211887   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:44.211896   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:44.223398   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:44.223407   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:44.237648   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:44.237661   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:44.255002   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:44.255013   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:44.266983   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:44.266994   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:44.280534   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:44.280548   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:44.291721   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:44.291729   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
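Each retry cycle above has the same shape: enumerate a component's containers with a docker name filter, then tail the last 400 lines of each. Below is a minimal Go sketch of one such pass, assuming direct docker access on the local host; the log itself runs these commands through an SSH runner inside the guest, and the helper names here are illustrative, not minikube's actual code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tailComponentLogs lists every container whose name matches
	// k8s_<component> and prints the last 400 log lines of each,
	// mirroring the "docker ps -a --filter" / "docker logs --tail 400"
	// pairs in the log above.
	func tailComponentLogs(component string) error {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return err
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// docker logs may write to stderr, so capture both streams.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
		}
		return nil
	}

	func main() {
		for _, c := range []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "storage-provisioner",
		} {
			if err := tailComponentLogs(c); err != nil {
				fmt.Println(c, ":", err)
			}
		}
	}

Note that a component with two listed IDs (e.g. [ba4be4c164d3 878adc2fd22f] for kube-apiserver) indicates a current container plus an exited earlier one, which is why most components above are gathered twice per cycle.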
	I0923 04:32:46.817045   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:51.819423   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:51.819667   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:51.841786   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:51.841910   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:51.858416   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:51.858513   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:51.871057   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:51.871139   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:51.882360   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:51.882437   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:51.892685   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:51.892768   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:51.902970   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:51.903050   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:51.913764   20906 logs.go:276] 0 containers: []
	W0923 04:32:51.913776   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:51.913847   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:51.923996   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:51.924013   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:32:51.924019   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:32:51.958394   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:51.958410   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:32:51.970822   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:32:51.970838   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:32:51.986500   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:51.986510   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:52.000524   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:32:52.000534   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:32:52.016533   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:32:52.016544   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:32:52.027421   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:52.027435   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:52.040562   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:32:52.040574   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:32:52.051966   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:52.051981   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:52.091381   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:52.091393   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:52.102498   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:32:52.102509   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:32:52.114281   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:52.114293   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:52.132751   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:52.132765   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:52.150340   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:32:52.150352   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:32:52.154699   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:32:52.154709   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:32:52.180018   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:52.180028   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:52.200956   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:52.200971   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:54.727525   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:32:59.729938   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:32:59.730156   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:32:59.762138   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:32:59.762227   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:32:59.773505   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:32:59.773588   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:32:59.783896   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:32:59.783981   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:32:59.798543   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:32:59.798630   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:32:59.809401   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:32:59.809483   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:32:59.820064   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:32:59.820149   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:32:59.830226   20906 logs.go:276] 0 containers: []
	W0923 04:32:59.830243   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:32:59.830316   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:32:59.844545   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:32:59.844564   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:32:59.844569   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:32:59.856649   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:32:59.856665   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:32:59.892126   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:32:59.892136   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:32:59.903683   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:32:59.903693   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:32:59.920987   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:32:59.921000   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:32:59.943822   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:32:59.943831   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:32:59.956644   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:32:59.956654   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:32:59.971514   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:32:59.971528   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:32:59.990895   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:32:59.990904   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:00.002410   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:00.002422   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:00.027038   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:00.027048   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:00.038172   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:00.038182   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:00.053161   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:00.053172   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:00.064839   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:00.064850   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:00.085175   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:00.085189   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:00.089396   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:00.089405   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:00.125394   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:00.125405   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:02.641665   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:07.644440   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:07.644873   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:07.680996   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:07.681166   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:07.703004   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:07.703129   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:07.718289   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:07.718388   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:07.735368   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:07.735456   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:07.746409   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:07.746488   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:07.757191   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:07.757277   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:07.767816   20906 logs.go:276] 0 containers: []
	W0923 04:33:07.767828   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:07.767902   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:07.778432   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:07.778450   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:07.778455   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:07.816260   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:07.816270   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:07.828606   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:07.828619   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:07.840184   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:07.840198   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:07.873760   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:07.873774   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:07.893756   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:07.893773   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:07.909997   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:07.910012   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:07.924166   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:07.924176   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:07.938434   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:07.938444   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:07.949865   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:07.949875   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:07.973314   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:07.973324   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:07.985323   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:07.985336   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:07.998456   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:07.998466   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:08.023278   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:08.023288   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:08.027700   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:08.027710   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:08.052098   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:08.052107   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:08.066571   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:08.066581   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:10.586066   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:15.588501   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:15.588638   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:15.602209   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:15.602281   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:15.613451   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:15.613531   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:15.623630   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:15.623715   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:15.633709   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:15.633788   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:15.643872   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:15.643952   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:15.654307   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:15.654383   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:15.668843   20906 logs.go:276] 0 containers: []
	W0923 04:33:15.668859   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:15.668920   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:15.681425   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:15.681445   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:15.681450   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:15.706141   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:15.706152   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:15.720389   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:15.720399   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:15.734462   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:15.734476   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:15.757073   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:15.757080   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:15.768622   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:15.768633   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:15.779904   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:15.779919   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:15.791376   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:15.791388   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:15.806895   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:15.806909   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:15.824609   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:15.824619   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:15.836734   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:15.836744   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:15.875074   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:15.875085   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:15.879759   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:15.879766   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:15.915727   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:15.915743   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:15.930666   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:15.930679   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:15.942043   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:15.942053   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:15.955064   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:15.955080   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:18.471216   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:23.473582   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:23.473712   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:23.488022   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:23.488119   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:23.499772   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:23.499859   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:23.509834   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:23.509917   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:23.520005   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:23.520089   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:23.530284   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:23.530366   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:23.540715   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:23.540791   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:23.550431   20906 logs.go:276] 0 containers: []
	W0923 04:33:23.550443   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:23.550529   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:23.560968   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:23.560986   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:23.560991   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:23.586253   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:23.586264   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:23.601716   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:23.601730   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:23.613401   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:23.613414   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:23.625104   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:23.625116   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:23.646893   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:23.646900   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:23.651331   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:23.651339   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:23.686191   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:23.686201   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:23.700648   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:23.700658   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:23.715197   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:23.715207   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:23.732405   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:23.732417   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:23.744335   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:23.744344   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:23.762406   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:23.762416   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:23.773929   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:23.773942   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:23.800957   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:23.800966   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:23.813501   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:23.813512   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:23.850599   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:23.850609   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:26.364670   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:31.366908   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:31.367248   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:31.404148   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:31.404269   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:31.420474   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:31.420578   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:31.433874   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:31.433953   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:31.444826   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:31.444912   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:31.455758   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:31.455868   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:31.466515   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:31.466602   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:31.476397   20906 logs.go:276] 0 containers: []
	W0923 04:33:31.476407   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:31.476472   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:31.486849   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:31.486868   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:31.486873   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:31.498736   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:31.498748   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:31.510390   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:31.510402   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:31.522600   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:31.522615   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:31.527215   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:31.527221   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:31.541901   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:31.541911   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:31.553147   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:31.553156   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:31.574829   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:31.574839   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:31.587356   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:31.587367   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:31.610896   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:31.610904   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:31.645370   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:31.645381   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:31.659667   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:31.659677   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:31.671747   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:31.671760   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:31.687053   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:31.687063   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:31.725822   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:31.725830   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:31.752100   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:31.752109   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:31.766512   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:31.766523   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:34.280052   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:39.282413   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:39.282659   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:39.305506   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:39.305628   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:39.322504   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:39.322591   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:39.335481   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:39.335566   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:39.353908   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:39.353989   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:39.364069   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:39.364155   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:39.374625   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:39.374705   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:39.385103   20906 logs.go:276] 0 containers: []
	W0923 04:33:39.385113   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:39.385175   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:39.395176   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:39.395192   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:39.395197   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:39.408836   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:39.408845   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:39.422080   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:39.422090   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:39.444556   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:39.444566   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:39.456417   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:39.456427   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:39.467991   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:39.468000   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:39.490314   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:39.490324   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:39.512656   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:39.512671   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:39.525370   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:39.525386   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:39.537515   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:39.537526   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:39.554137   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:39.554154   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:39.568333   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:39.568343   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:39.579347   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:39.579359   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:39.591412   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:39.591427   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:39.629577   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:39.629586   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:39.633549   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:39.633558   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:39.669035   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:39.669046   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:42.206043   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:47.208730   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:47.208922   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:47.221230   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:47.221321   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:47.232403   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:47.232493   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:47.243149   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:47.243238   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:47.253387   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:47.253465   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:47.264367   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:47.264455   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:47.274745   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:47.274827   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:47.285571   20906 logs.go:276] 0 containers: []
	W0923 04:33:47.285582   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:47.285655   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:47.296544   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:47.296564   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:47.296570   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:47.331586   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:47.331602   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:47.345245   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:47.345256   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:47.356018   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:47.356027   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:47.369741   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:47.369751   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:47.382242   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:47.382257   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:47.393816   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:47.393826   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:47.407198   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:47.407210   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:47.445358   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:47.445365   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:47.468393   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:47.468410   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:47.504018   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:47.504037   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:47.538706   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:47.538717   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:47.550399   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:47.550415   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:47.562108   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:47.562119   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:47.566929   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:47.566936   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:47.584389   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:47.584403   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:47.600131   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:47.600142   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:50.116066   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:33:55.118304   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:33:55.118558   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:33:55.149982   20906 logs.go:276] 2 containers: [ba4be4c164d3 878adc2fd22f]
	I0923 04:33:55.150098   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:33:55.167527   20906 logs.go:276] 2 containers: [540584041ca0 3ce2ac2fc4e2]
	I0923 04:33:55.167634   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:33:55.180972   20906 logs.go:276] 1 containers: [4b25be594587]
	I0923 04:33:55.181059   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:33:55.192686   20906 logs.go:276] 2 containers: [ee52514e5b96 2143f864eac3]
	I0923 04:33:55.192769   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:33:55.203405   20906 logs.go:276] 1 containers: [54a06cb05e6b]
	I0923 04:33:55.203484   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:33:55.214561   20906 logs.go:276] 2 containers: [417011b5ad54 280cd681106d]
	I0923 04:33:55.214642   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:33:55.224633   20906 logs.go:276] 0 containers: []
	W0923 04:33:55.224645   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:33:55.224708   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:33:55.234516   20906 logs.go:276] 2 containers: [f0829c6e70e7 7b391cd17bc7]
	I0923 04:33:55.234534   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:33:55.234540   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:33:55.256626   20906 logs.go:123] Gathering logs for etcd [3ce2ac2fc4e2] ...
	I0923 04:33:55.256634   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3ce2ac2fc4e2"
	I0923 04:33:55.272347   20906 logs.go:123] Gathering logs for kube-scheduler [2143f864eac3] ...
	I0923 04:33:55.272356   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2143f864eac3"
	I0923 04:33:55.287294   20906 logs.go:123] Gathering logs for kube-controller-manager [280cd681106d] ...
	I0923 04:33:55.287309   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 280cd681106d"
	I0923 04:33:55.299304   20906 logs.go:123] Gathering logs for kube-apiserver [ba4be4c164d3] ...
	I0923 04:33:55.299318   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba4be4c164d3"
	I0923 04:33:55.314385   20906 logs.go:123] Gathering logs for etcd [540584041ca0] ...
	I0923 04:33:55.314401   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 540584041ca0"
	I0923 04:33:55.328922   20906 logs.go:123] Gathering logs for kube-proxy [54a06cb05e6b] ...
	I0923 04:33:55.328931   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54a06cb05e6b"
	I0923 04:33:55.340526   20906 logs.go:123] Gathering logs for storage-provisioner [f0829c6e70e7] ...
	I0923 04:33:55.340542   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0829c6e70e7"
	I0923 04:33:55.352210   20906 logs.go:123] Gathering logs for storage-provisioner [7b391cd17bc7] ...
	I0923 04:33:55.352220   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b391cd17bc7"
	I0923 04:33:55.363983   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:33:55.363995   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:33:55.368225   20906 logs.go:123] Gathering logs for kube-apiserver [878adc2fd22f] ...
	I0923 04:33:55.368234   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878adc2fd22f"
	I0923 04:33:55.398910   20906 logs.go:123] Gathering logs for kube-controller-manager [417011b5ad54] ...
	I0923 04:33:55.398919   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 417011b5ad54"
	I0923 04:33:55.418337   20906 logs.go:123] Gathering logs for kube-scheduler [ee52514e5b96] ...
	I0923 04:33:55.418354   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee52514e5b96"
	I0923 04:33:55.431196   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:33:55.431208   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:33:55.443180   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:33:55.443190   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:33:55.490626   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:33:55.490634   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:33:55.526729   20906 logs.go:123] Gathering logs for coredns [4b25be594587] ...
	I0923 04:33:55.526745   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b25be594587"
	I0923 04:33:58.040054   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:03.042432   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
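Every health probe in the four minutes above fails the same way: a GET against /healthz with a short per-attempt client timeout, retried until an overall deadline lapses. A minimal sketch of that poll-until-deadline pattern follows, assuming a 5s per-attempt timeout inferred from the gap between each "Checking apiserver healthz" line and its "stopped: ... Client.Timeout exceeded" companion; this is an illustration of the pattern, not minikube's actual api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 OK or the overall
	// deadline passes. Each attempt is bounded by a 5s client timeout.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver inside the VM serves a self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second) // pause between attempts, as in the cycles above
		}
		return fmt.Errorf("%s not healthy within %s", url, overall)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}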
	I0923 04:34:03.042515   20906 kubeadm.go:597] duration metric: took 4m4.43969075s to restartPrimaryControlPlane
	W0923 04:34:03.042582   20906 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0923 04:34:03.042612   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0923 04:34:04.070341   20906 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.027722291s)
	I0923 04:34:04.070412   20906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 04:34:04.075534   20906 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 04:34:04.078360   20906 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 04:34:04.081202   20906 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 04:34:04.081209   20906 kubeadm.go:157] found existing configuration files:
	
	I0923 04:34:04.081234   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/admin.conf
	I0923 04:34:04.083759   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 04:34:04.083788   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 04:34:04.086356   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/kubelet.conf
	I0923 04:34:04.089321   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 04:34:04.089346   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 04:34:04.092030   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/controller-manager.conf
	I0923 04:34:04.094526   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 04:34:04.094554   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 04:34:04.097744   20906 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/scheduler.conf
	I0923 04:34:04.100866   20906 kubeadm.go:163] "https://control-plane.minikube.internal:53273" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53273 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 04:34:04.100894   20906 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
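The grep/rm pairs above implement a simple stale-config sweep: a kubeconfig is kept only if it already references the expected control-plane endpoint, and a failed grep (pattern absent or file missing, as with every file here after the reset) triggers removal. A rough Go equivalent, assuming local execution for simplicity where the real commands run over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cleanStaleConfigs removes any kubeconfig that does not reference
	// the expected endpoint; grep exiting non-zero is the
	// "may not be in ... - will remove" case logged above.
	func cleanStaleConfigs(endpoint string) {
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
				_ = exec.Command("sudo", "rm", "-f", conf).Run()
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:53273")
	}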
	I0923 04:34:04.103377   20906 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 04:34:04.120482   20906 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0923 04:34:04.120510   20906 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 04:34:04.169112   20906 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 04:34:04.169174   20906 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 04:34:04.169243   20906 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 04:34:04.217440   20906 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 04:34:04.222585   20906 out.go:235]   - Generating certificates and keys ...
	I0923 04:34:04.222617   20906 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 04:34:04.222651   20906 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 04:34:04.222702   20906 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 04:34:04.222738   20906 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0923 04:34:04.222779   20906 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0923 04:34:04.222811   20906 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0923 04:34:04.222852   20906 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0923 04:34:04.222886   20906 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0923 04:34:04.222927   20906 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 04:34:04.222970   20906 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 04:34:04.222991   20906 kubeadm.go:310] [certs] Using the existing "sa" key
	I0923 04:34:04.223022   20906 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 04:34:04.374215   20906 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 04:34:04.419855   20906 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 04:34:04.619206   20906 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 04:34:04.684372   20906 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 04:34:04.716663   20906 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 04:34:04.717096   20906 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 04:34:04.717250   20906 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 04:34:04.804720   20906 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 04:34:04.808717   20906 out.go:235]   - Booting up control plane ...
	I0923 04:34:04.808764   20906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 04:34:04.808799   20906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 04:34:04.808835   20906 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 04:34:04.809043   20906 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 04:34:04.810342   20906 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0923 04:34:09.313203   20906 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502529 seconds
	I0923 04:34:09.313264   20906 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 04:34:09.316586   20906 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 04:34:09.842857   20906 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 04:34:09.843169   20906 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-231000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 04:34:10.346134   20906 kubeadm.go:310] [bootstrap-token] Using token: l1h5q2.q8axi452higk507f
	I0923 04:34:10.350878   20906 out.go:235]   - Configuring RBAC rules ...
	I0923 04:34:10.350940   20906 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 04:34:10.350982   20906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 04:34:10.353073   20906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 04:34:10.357459   20906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 04:34:10.358451   20906 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 04:34:10.359394   20906 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 04:34:10.362708   20906 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 04:34:10.545251   20906 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 04:34:10.750529   20906 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 04:34:10.750906   20906 kubeadm.go:310] 
	I0923 04:34:10.750942   20906 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 04:34:10.750945   20906 kubeadm.go:310] 
	I0923 04:34:10.750978   20906 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 04:34:10.750981   20906 kubeadm.go:310] 
	I0923 04:34:10.751001   20906 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 04:34:10.751061   20906 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 04:34:10.751085   20906 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 04:34:10.751088   20906 kubeadm.go:310] 
	I0923 04:34:10.751117   20906 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 04:34:10.751121   20906 kubeadm.go:310] 
	I0923 04:34:10.751141   20906 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 04:34:10.751144   20906 kubeadm.go:310] 
	I0923 04:34:10.751168   20906 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 04:34:10.751204   20906 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 04:34:10.751246   20906 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 04:34:10.751251   20906 kubeadm.go:310] 
	I0923 04:34:10.751295   20906 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 04:34:10.751339   20906 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 04:34:10.751345   20906 kubeadm.go:310] 
	I0923 04:34:10.751384   20906 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l1h5q2.q8axi452higk507f \
	I0923 04:34:10.751435   20906 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a5393725c1ebf724a26137eacec694c8d322652550455bc31dd6da673086408b \
	I0923 04:34:10.751449   20906 kubeadm.go:310] 	--control-plane 
	I0923 04:34:10.751452   20906 kubeadm.go:310] 
	I0923 04:34:10.751502   20906 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 04:34:10.751508   20906 kubeadm.go:310] 
	I0923 04:34:10.751546   20906 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l1h5q2.q8axi452higk507f \
	I0923 04:34:10.751592   20906 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a5393725c1ebf724a26137eacec694c8d322652550455bc31dd6da673086408b 
	I0923 04:34:10.751915   20906 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
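The --discovery-token-ca-cert-hash value printed in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded public key (its SubjectPublicKeyInfo), which joining nodes use to pin the CA before trusting the bootstrap token. A sketch of recomputing it from the CA certificate; the path below matches the certs directory reported earlier and is otherwise an assumption:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute the kubeadm discovery hash: SHA-256 over the DER encoding of
// the CA certificate's SubjectPublicKeyInfo.
func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed CA path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```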
	I0923 04:34:10.751925   20906 cni.go:84] Creating CNI manager for ""
	I0923 04:34:10.751933   20906 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:34:10.756732   20906 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 04:34:10.763627   20906 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 04:34:10.766480   20906 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
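The 496-byte conflist copied above is not reproduced in the log. The following sketch writes a representative bridge configuration to the same path; the plugin mix and subnet are illustrative assumptions, not the file minikube actually generates:

```go
package main

import "os"

// A representative bridge CNI conflist; the exact contents of the 496-byte
// file in the log are not shown, so the plugin mix and subnet below are
// illustrative assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```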
	I0923 04:34:10.772420   20906 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 04:34:10.772491   20906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 04:34:10.772515   20906 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-231000 minikube.k8s.io/updated_at=2024_09_23T04_34_10_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=stopped-upgrade-231000 minikube.k8s.io/primary=true
	I0923 04:34:10.810907   20906 ops.go:34] apiserver oom_adj: -16
	I0923 04:34:10.810941   20906 kubeadm.go:1113] duration metric: took 38.510167ms to wait for elevateKubeSystemPrivileges
	I0923 04:34:10.814287   20906 kubeadm.go:394] duration metric: took 4m12.22471875s to StartCluster
	I0923 04:34:10.814301   20906 settings.go:142] acquiring lock: {Name:mkf31abe3bf81ad5b4da1674523af9683936735a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:34:10.814468   20906 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:34:10.814857   20906 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/kubeconfig: {Name:mke35d42fdea9892a3eb00f2ea9c8fc1f44681bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:34:10.815051   20906 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:34:10.815103   20906 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 04:34:10.815140   20906 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-231000"
	I0923 04:34:10.815151   20906 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-231000"
	W0923 04:34:10.815153   20906 addons.go:243] addon storage-provisioner should already be in state true
	I0923 04:34:10.815152   20906 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-231000"
	I0923 04:34:10.815158   20906 config.go:182] Loaded profile config "stopped-upgrade-231000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0923 04:34:10.815161   20906 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-231000"
	I0923 04:34:10.815163   20906 host.go:66] Checking if "stopped-upgrade-231000" exists ...
	I0923 04:34:10.815637   20906 retry.go:31] will retry after 761.291901ms: connect: dial unix /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/monitor: connect: connection refused
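The retry.go line above shows the machine monitor's unix socket refusing a dial, with a jittered delay before the next attempt. A minimal sketch of that retry-with-delay pattern; the 500ms base, the jitter range, and the attempt cap are assumptions:

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry retries a unix-socket dial with a jittered delay, in the
// spirit of the retry.go message above. The 500ms base, the jitter range,
// and the five-attempt cap are illustrative assumptions.
func dialWithRetry(socket string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("unix", socket, time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		delay := 500*time.Millisecond + time.Duration(rand.Int63n(int64(500*time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", attempts, lastErr)
}

func main() {
	if _, err := dialWithRetry("/tmp/monitor.sock", 5); err != nil {
		fmt.Println(err)
	}
}
```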
	I0923 04:34:10.816311   20906 kapi.go:59] client config for stopped-upgrade-231000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/stopped-upgrade-231000/client.key", CAFile:"/Users/jenkins/minikube-integration/19690-18362/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1044f6030), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 04:34:10.816436   20906 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-231000"
	W0923 04:34:10.816440   20906 addons.go:243] addon default-storageclass should already be in state true
	I0923 04:34:10.816447   20906 host.go:66] Checking if "stopped-upgrade-231000" exists ...
	I0923 04:34:10.816958   20906 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 04:34:10.816962   20906 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 04:34:10.816967   20906 sshutil.go:53] new ssh client: &{IP:localhost Port:53238 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0923 04:34:10.819599   20906 out.go:177] * Verifying Kubernetes components...
	I0923 04:34:10.826628   20906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 04:34:10.900057   20906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 04:34:10.905176   20906 api_server.go:52] waiting for apiserver process to appear ...
	I0923 04:34:10.905229   20906 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 04:34:10.907108   20906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 04:34:10.911078   20906 api_server.go:72] duration metric: took 96.007958ms to wait for apiserver process to appear ...
	I0923 04:34:10.911114   20906 api_server.go:88] waiting for apiserver healthz status ...
	I0923 04:34:10.911130   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:11.227423   20906 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 04:34:11.227435   20906 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 04:34:11.584380   20906 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 04:34:11.588390   20906 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 04:34:11.588402   20906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 04:34:11.588411   20906 sshutil.go:53] new ssh client: &{IP:localhost Port:53238 SSHKeyPath:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/stopped-upgrade-231000/id_rsa Username:docker}
	I0923 04:34:11.620798   20906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 04:34:15.913197   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:15.913212   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:20.913431   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:20.913450   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:25.913706   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:25.913733   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:30.914127   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:30.914188   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:35.914855   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:35.914895   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:40.915611   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:40.915634   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0923 04:34:41.227824   20906 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0923 04:34:41.232309   20906 out.go:177] * Enabled addons: storage-provisioner
	I0923 04:34:41.240183   20906 addons.go:510] duration metric: took 30.425215459s for enable addons: enabled=[storage-provisioner]
	I0923 04:34:45.916532   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:45.916553   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:50.917248   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:50.917297   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:34:55.918611   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:34:55.918639   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:00.920278   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:00.920297   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:05.921178   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:05.921212   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:10.921577   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
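Each healthz probe in the run above blocks for roughly five seconds before reporting a client timeout, so the loop amounts to a plain HTTP GET against the apiserver repeated until a 200 arrives or an outer deadline expires. A minimal sketch, skipping TLS verification for brevity where the real check authenticates with the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the outer deadline passes. InsecureSkipVerify is a simplification for
// this sketch; the real check verifies against the cluster CA.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between probes in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // the 5s client timeout already paced this attempt
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		time.Sleep(time.Second) // brief pause before re-probing a live but unhealthy endpoint
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```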
	I0923 04:35:10.921704   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:10.933439   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:10.933523   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:10.943653   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:10.943739   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:10.954423   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:10.954508   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:10.965431   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:10.965517   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:10.976789   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:10.976862   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:10.987336   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:10.987412   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:10.997182   20906 logs.go:276] 0 containers: []
	W0923 04:35:10.997194   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:10.997267   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:11.007269   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:11.007286   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:11.007291   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:35:11.018899   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:11.018910   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:11.042977   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:11.042985   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:11.054339   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:11.054351   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:11.059418   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:11.059426   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:11.074082   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:11.074096   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:11.088208   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:11.088224   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:11.103556   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:11.103570   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:11.115405   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:11.115413   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:11.133195   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:11.133205   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:11.144791   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:11.144800   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:11.177739   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:11.177748   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:11.212259   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:11.212272   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
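Once the healthz probe keeps failing, the tool falls back to diagnostics: it discovers containers per control-plane component with a docker ps name filter, then tails the last 400 lines of each container's log. A sketch of that discovery-and-tail loop, again running docker locally as a stand-in for the SSH-executed commands:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the name filters used in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		for _, id := range ids {
			// Tail the last 400 lines, matching the commands in the log.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```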
	I0923 04:35:13.726326   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:18.728473   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:18.728621   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:18.742524   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:18.742631   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:18.753752   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:18.753831   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:18.763997   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:18.764077   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:18.774860   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:18.774943   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:18.786334   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:18.786412   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:18.797240   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:18.797316   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:18.807450   20906 logs.go:276] 0 containers: []
	W0923 04:35:18.807460   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:18.807523   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:18.817992   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:18.818008   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:18.818013   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:18.829596   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:18.829606   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:18.847837   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:18.847854   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:18.859690   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:18.859701   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:18.871063   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:18.871078   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:18.906423   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:18.906440   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:18.920589   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:18.920602   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:35:18.933842   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:18.933856   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:18.949057   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:18.949071   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:18.972279   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:18.972287   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:19.005037   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:19.005044   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:19.009048   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:19.009055   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:19.023337   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:19.023348   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:35:21.537182   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:26.539617   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:26.539916   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:26.564048   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:26.564185   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:26.579798   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:26.579905   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:26.591636   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:26.591722   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:26.604444   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:26.604524   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:26.614952   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:26.615044   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:26.627152   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:26.627228   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:26.637212   20906 logs.go:276] 0 containers: []
	W0923 04:35:26.637225   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:26.637294   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:26.647965   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:26.647980   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:26.647985   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:35:26.659533   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:26.659544   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:26.663925   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:26.663930   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:26.678389   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:26.678400   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:35:26.690078   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:26.690088   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:26.705175   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:26.705185   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:26.716913   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:26.716928   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:26.734423   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:26.734432   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:26.749435   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:26.749451   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:26.773970   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:26.773977   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:26.808122   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:26.808129   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:26.843814   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:26.843830   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:26.857897   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:26.857914   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:29.371955   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:34.374240   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:34.374413   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:34.387233   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:34.387330   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:34.397830   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:34.397907   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:34.407960   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:34.408047   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:34.418203   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:34.418278   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:34.428285   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:34.428364   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:34.438325   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:34.438403   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:34.448673   20906 logs.go:276] 0 containers: []
	W0923 04:35:34.448691   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:34.448767   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:34.459010   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:34.459027   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:34.459032   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:34.470341   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:34.470352   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:34.495802   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:34.495810   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:34.507558   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:34.507569   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:34.542558   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:34.542567   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:35:34.553814   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:34.553823   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:34.569155   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:34.569169   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:34.580819   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:34.580830   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:35:34.592783   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:34.592794   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:34.610260   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:34.610270   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:34.615217   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:34.615223   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:34.652280   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:34.652294   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:34.666446   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:34.666461   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:37.190047   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:42.192217   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:42.192358   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:42.203780   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:42.203874   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:42.214181   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:42.214260   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:42.225279   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:42.225360   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:42.235397   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:42.235480   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:42.246384   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:42.246467   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:42.256647   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:42.256731   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:42.266872   20906 logs.go:276] 0 containers: []
	W0923 04:35:42.266888   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:42.266954   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:42.277485   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:42.277502   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:42.277507   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:42.303198   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:42.303207   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:42.338621   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:42.338631   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:42.343247   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:42.343254   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:35:42.354620   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:42.354630   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:42.369678   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:42.369695   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:42.381397   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:42.381406   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:42.398876   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:42.398887   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:42.410799   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:42.410809   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:42.448266   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:42.448276   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:42.462801   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:42.462811   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:42.477026   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:42.477035   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:35:42.488395   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:42.488411   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:45.001990   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:50.002583   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:50.002784   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:50.021141   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:50.021244   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:50.033408   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:50.033493   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:50.043860   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:50.043941   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:50.054489   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:50.054559   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:50.064669   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:50.064757   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:50.076559   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:50.076637   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:50.086915   20906 logs.go:276] 0 containers: []
	W0923 04:35:50.086963   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:50.087033   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:50.098845   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:50.098859   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:50.098864   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:50.133430   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:50.133438   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:50.137215   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:50.137220   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:50.151289   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:50.151301   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:50.166446   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:50.166455   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:50.183967   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:50.183980   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:50.207629   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:50.207638   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:50.218576   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:50.218586   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:50.253280   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:50.253294   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:50.267105   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:50.267120   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:35:50.279295   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:50.279306   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:35:50.291263   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:50.291273   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:50.303090   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:50.303098   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:52.816572   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:35:57.818833   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:35:57.819028   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:35:57.837781   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:35:57.837894   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:35:57.852118   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:35:57.852214   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:35:57.864151   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:35:57.864238   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:35:57.877055   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:35:57.877136   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:35:57.887548   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:35:57.887625   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:35:57.898012   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:35:57.898100   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:35:57.908465   20906 logs.go:276] 0 containers: []
	W0923 04:35:57.908475   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:35:57.908545   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:35:57.918594   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:35:57.918610   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:35:57.918616   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:35:57.952223   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:35:57.952231   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:35:57.967734   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:35:57.967749   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:35:57.979643   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:35:57.979657   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:35:57.997537   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:35:57.997545   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:35:58.021257   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:35:58.021266   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:35:58.040218   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:35:58.040229   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:35:58.052155   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:35:58.052166   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:35:58.056231   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:35:58.056238   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:35:58.090679   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:35:58.090694   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:35:58.105414   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:35:58.105428   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:35:58.118939   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:35:58.118953   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:35:58.131699   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:35:58.131709   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:00.645338   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:05.647637   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:05.647862   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:05.669504   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:05.669593   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:05.681591   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:05.681670   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:05.691874   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:36:05.691944   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:05.701946   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:05.702019   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:05.717058   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:05.717148   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:05.727290   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:05.727374   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:05.738135   20906 logs.go:276] 0 containers: []
	W0923 04:36:05.738151   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:05.738226   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:05.749692   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:05.749707   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:05.749713   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:05.773322   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:05.773330   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:05.785349   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:05.785363   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:05.819573   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:05.819584   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:05.834121   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:05.834135   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:05.851508   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:05.851521   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:05.862890   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:05.862900   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:05.875103   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:05.875113   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:05.890570   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:05.890580   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:36:05.907301   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:05.907314   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:05.911387   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:05.911395   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:05.946313   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:05.946324   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:05.960209   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:05.960222   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:08.473967   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:13.475998   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
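	(The five-second gap between each "Checking apiserver healthz" line and the following "stopped: ... Client.Timeout exceeded" line reflects the probe's client-side timeout; after every failed probe, minikube re-enumerates the control-plane containers and re-gathers their logs before retrying. A minimal sketch of that probe pattern in Go, assuming the guest IP 10.0.2.15 and a self-signed serving certificate; this is illustrative, not minikube's actual implementation:

	// probe_sketch.go: retry GET /healthz with a ~5s timeout, as the log above suggests.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s "Checking"/"stopped" gap in the log
			Transport: &http.Transport{
				// assumption: the apiserver serves a self-signed cert, so skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(4 * time.Minute) // illustrative overall deadline
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // timestamps show ~2-3s between retry cycles
		}
		fmt.Println("apiserver never became healthy")
	}

	Each failed iteration of such a loop corresponds to one "Checking ... / stopped ..." pair in the log.)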
	I0923 04:36:13.476187   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:13.491848   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:13.491947   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:13.503818   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:13.503901   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:13.514760   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:36:13.514846   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:13.525435   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:13.525522   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:13.539689   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:13.539765   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:13.550550   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:13.550630   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:13.564530   20906 logs.go:276] 0 containers: []
	W0923 04:36:13.564543   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:13.564608   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:13.575399   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:13.575415   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:13.575420   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:13.610607   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:13.610618   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:13.624608   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:13.624617   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:13.640949   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:13.640960   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:13.656083   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:13.656092   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:13.673447   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:13.673463   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:13.686154   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:13.686164   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:13.719348   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:13.719359   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:13.723817   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:13.723824   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:13.738705   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:13.738716   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:13.749935   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:13.749951   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:36:13.762319   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:13.762330   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:13.787413   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:13.787427   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:16.300780   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:21.303112   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:21.303625   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:21.345371   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:21.345540   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:21.367378   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:21.367498   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:21.383013   20906 logs.go:276] 2 containers: [a08ca05660a6 91da5fadb655]
	I0923 04:36:21.383116   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:21.396185   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:21.396274   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:21.412481   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:21.412569   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:21.423082   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:21.423163   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:21.433435   20906 logs.go:276] 0 containers: []
	W0923 04:36:21.433447   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:21.433515   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:21.444599   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:21.444616   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:21.444622   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:21.456909   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:21.456924   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:21.472854   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:21.472864   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:36:21.484505   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:21.484519   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:21.506045   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:21.506055   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:21.541400   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:21.541409   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:21.576876   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:21.576886   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:21.591829   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:21.591839   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:21.606075   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:21.606083   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:21.617530   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:21.617546   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:21.629335   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:21.629350   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:21.652600   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:21.652610   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:21.668253   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:21.668268   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:24.174656   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:29.177399   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:29.177639   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:29.192669   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:29.192772   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:29.204227   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:29.204299   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:29.219266   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:36:29.219341   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:29.230075   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:29.230163   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:29.242479   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:29.242562   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:29.252874   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:29.252952   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:29.262661   20906 logs.go:276] 0 containers: []
	W0923 04:36:29.262675   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:29.262744   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:29.273240   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:29.273263   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:29.273269   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:29.284937   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:29.284950   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:29.305666   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:29.305676   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:29.316892   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:29.316907   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:29.328905   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:29.328915   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:29.364501   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:29.364510   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:29.368605   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:29.368613   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:29.383092   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:36:29.383102   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:36:29.395301   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:29.395313   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:29.414183   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:29.414192   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:29.438841   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:29.438853   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:29.472644   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:29.472660   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:29.486479   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:36:29.486493   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:36:29.498701   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:29.498711   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:29.510365   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:29.510381   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
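	(Note that from 04:36:29 onward the coredns filter returns four container IDs instead of two — d7e0fb6fd0fc and 0c63ff4ad54f appear alongside a08ca05660a6 and 91da5fadb655 — suggesting the coredns containers were recreated while the apiserver stayed unreachable; `docker ps -a` also lists the exited instances. The enumeration step itself is a plain `docker ps` run over SSH. A minimal local sketch in Go, assuming a reachable Docker daemon; illustrative only:

	// enumerate_sketch.go: list container IDs matching a name filter, as minikube does above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("k8s_coredns")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	})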
	I0923 04:36:32.023638   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:37.025865   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:37.026156   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:37.052645   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:37.052793   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:37.071133   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:37.071234   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:37.084764   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:36:37.084853   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:37.096376   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:37.096459   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:37.107102   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:37.107182   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:37.117603   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:37.117681   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:37.131746   20906 logs.go:276] 0 containers: []
	W0923 04:36:37.131760   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:37.131833   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:37.142061   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:37.142079   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:37.142084   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:37.162777   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:36:37.162786   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:36:37.173945   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:37.173956   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:37.190185   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:37.190197   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:37.223565   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:37.223573   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:37.227703   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:37.227710   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:37.251059   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:37.251066   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:37.262185   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:37.262200   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:37.277529   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:37.277541   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:37.290279   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:37.290290   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:37.308809   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:37.308825   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:37.324562   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:37.324572   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:37.343657   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:37.343667   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:37.386184   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:36:37.386197   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:36:37.399250   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:37.399261   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:36:39.912494   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:44.912767   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:44.913017   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:44.928695   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:44.928791   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:44.942024   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:44.942105   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:44.953425   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:36:44.953506   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:44.964076   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:44.964165   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:44.975215   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:44.975289   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:44.992296   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:44.992379   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:45.004224   20906 logs.go:276] 0 containers: []
	W0923 04:36:45.004235   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:45.004293   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:45.014808   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:45.014828   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:45.014833   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:45.049830   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:36:45.049840   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:36:45.061870   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:36:45.061880   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:36:45.073111   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:45.073124   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:45.091253   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:45.091262   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:45.095620   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:45.095627   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:45.118949   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:45.118957   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:45.130683   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:45.130699   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:45.144819   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:45.144829   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:45.156362   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:45.156372   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:45.168128   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:45.168138   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:45.183265   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:45.183278   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:45.194851   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:45.194862   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:45.214978   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:45.214988   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:36:45.226764   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:45.226774   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:47.764091   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:36:52.766409   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:36:52.766691   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:36:52.787645   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:36:52.787769   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:36:52.802057   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:36:52.802153   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:36:52.814742   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:36:52.814827   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:36:52.826084   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:36:52.826160   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:36:52.836234   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:36:52.836317   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:36:52.848167   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:36:52.848246   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:36:52.858225   20906 logs.go:276] 0 containers: []
	W0923 04:36:52.858239   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:36:52.858309   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:36:52.869324   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:36:52.869344   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:36:52.869350   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:36:52.893808   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:36:52.893817   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:36:52.931306   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:36:52.931319   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:36:52.943205   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:36:52.943217   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:36:52.958626   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:36:52.958635   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:36:52.993703   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:36:52.993711   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:36:53.008166   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:36:53.008180   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:36:53.019871   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:36:53.019883   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:36:53.034980   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:36:53.034991   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:36:53.048931   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:36:53.048941   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:36:53.060710   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:36:53.060721   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:36:53.072815   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:36:53.072825   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:36:53.090647   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:36:53.090662   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:36:53.095419   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:36:53.095426   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:36:53.106784   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:36:53.106793   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:36:55.627750   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:00.628554   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:00.628746   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:00.641049   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:00.641140   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:00.652559   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:00.652643   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:00.663185   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:00.663268   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:00.673711   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:00.673794   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:00.684683   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:00.684767   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:00.695513   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:00.695591   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:00.706328   20906 logs.go:276] 0 containers: []
	W0923 04:37:00.706342   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:00.706409   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:00.716781   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:00.716800   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:00.716806   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:00.732216   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:00.732231   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:00.765796   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:00.765808   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:00.780429   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:00.780440   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:00.793372   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:00.793381   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:00.806933   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:00.806945   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:00.818276   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:00.818287   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:00.832735   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:00.832746   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:00.845253   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:00.845262   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:00.856921   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:00.856937   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:00.872495   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:00.872509   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:00.905460   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:00.905468   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:00.910454   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:00.910461   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:00.936033   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:00.936041   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:00.947807   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:00.947822   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:03.467220   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:08.469530   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:08.469750   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:08.491464   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:08.491586   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:08.505746   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:08.505830   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:08.518248   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:08.518328   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:08.529459   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:08.529539   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:08.539910   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:08.539997   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:08.550313   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:08.550384   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:08.560819   20906 logs.go:276] 0 containers: []
	W0923 04:37:08.560832   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:08.560911   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:08.571171   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:08.571187   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:08.571192   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:08.585139   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:08.585152   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:08.597466   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:08.597481   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:08.612965   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:08.612979   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:08.617532   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:08.617540   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:08.642238   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:08.642267   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:08.656199   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:08.656215   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:08.692964   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:08.692975   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:08.707565   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:08.707579   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:08.721211   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:08.721225   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:08.748410   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:08.748423   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:08.768327   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:08.768341   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:08.802317   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:08.802327   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:08.814340   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:08.814350   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:08.826695   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:08.826708   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:11.346575   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:16.348960   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:16.349199   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:16.365797   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:16.365910   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:16.378579   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:16.378666   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:16.389410   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:16.389494   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:16.400257   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:16.400343   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:16.411112   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:16.411193   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:16.422092   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:16.422169   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:16.432735   20906 logs.go:276] 0 containers: []
	W0923 04:37:16.432747   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:16.432816   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:16.442441   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:16.442460   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:16.442465   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:16.454043   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:16.454055   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:16.470983   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:16.470993   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:16.482445   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:16.482458   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:16.520345   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:16.520357   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:16.538100   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:16.538116   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:16.542747   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:16.542756   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:16.562158   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:16.562169   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:16.573889   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:16.573900   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:16.585849   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:16.585860   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:16.597605   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:16.597616   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:16.632018   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:16.632035   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:16.646196   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:16.646207   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:16.657614   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:16.657626   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:16.683473   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:16.683484   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:19.199238   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:24.200453   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:24.200569   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:24.211606   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:24.211695   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:24.222337   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:24.222417   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:24.237638   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:24.237732   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:24.248222   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:24.248299   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:24.262197   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:24.262271   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:24.280930   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:24.281012   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:24.291080   20906 logs.go:276] 0 containers: []
	W0923 04:37:24.291093   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:24.291163   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:24.301773   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:24.301814   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:24.301822   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:24.314208   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:24.314219   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:24.328470   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:24.328480   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:24.343719   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:24.343732   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:24.355109   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:24.355120   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:24.366844   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:24.366857   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:24.400302   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:24.400312   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:24.412270   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:24.412281   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:24.431609   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:24.431621   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:24.456869   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:24.456880   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:24.468139   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:24.468150   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:24.502253   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:24.502264   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:24.514338   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:24.514348   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:24.532150   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:24.532165   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:24.548762   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:24.548776   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:27.054957   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:32.057296   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:32.057468   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:32.067962   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:32.068050   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:32.078873   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:32.078969   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:32.089575   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:32.089657   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:32.100847   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:32.100928   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:32.111731   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:32.111811   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:32.122459   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:32.122537   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:32.133056   20906 logs.go:276] 0 containers: []
	W0923 04:37:32.133069   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:32.133134   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:32.143445   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:32.143463   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:32.143468   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:32.177957   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:32.177973   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:32.193683   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:32.193696   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:32.205204   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:32.205214   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:32.240310   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:32.240318   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:32.252472   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:32.252487   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:32.271380   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:32.271393   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:32.282604   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:32.282619   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:32.294359   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:32.294369   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:32.305974   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:32.305988   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:32.317596   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:32.317607   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:32.343484   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:32.343494   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:32.354820   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:32.354833   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:32.369172   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:32.369188   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:32.386540   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:32.386553   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:34.892001   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:39.894220   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:39.894423   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:39.908995   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:39.909095   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:39.920993   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:39.921082   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:39.932470   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:39.932554   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:39.942705   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:39.942792   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:39.953304   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:39.953382   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:39.963640   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:39.963730   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:39.983720   20906 logs.go:276] 0 containers: []
	W0923 04:37:39.983731   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:39.983800   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:39.994417   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:39.994436   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:39.994442   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:40.011920   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:40.011932   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:40.026772   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:40.026787   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:40.038530   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:40.038539   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:40.053743   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:40.053756   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:40.066908   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:40.066921   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:40.083022   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:40.083033   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:40.117978   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:40.117987   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:40.135268   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:40.135279   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:40.149313   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:40.149322   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:40.186348   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:40.186360   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:40.198254   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:40.198265   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:40.210137   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:40.210149   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:40.222882   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:40.222895   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:40.248062   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:40.248071   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:42.754379   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:47.756668   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:47.756898   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:47.783985   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:47.784092   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:47.799902   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:47.799989   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:47.815797   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:47.815886   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:47.826451   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:47.826535   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:47.837650   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:47.837727   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:47.848479   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:47.848560   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:47.858636   20906 logs.go:276] 0 containers: []
	W0923 04:37:47.858649   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:47.858720   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:47.869959   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:47.869975   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:47.869981   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:47.874615   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:47.874622   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:47.908527   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:47.908543   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:47.923136   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:47.923147   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:47.934832   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:47.934843   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:47.948207   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:47.948221   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:47.959583   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:47.959597   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:47.977613   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:47.977623   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:48.010736   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:48.010744   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:48.026408   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:48.026417   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:48.047384   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:48.047395   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:48.063363   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:48.063374   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:48.075135   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:48.075146   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:48.087780   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:48.087791   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:48.112684   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:48.112693   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:50.626599   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:37:55.628890   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:37:55.629047   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:37:55.640160   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:37:55.640250   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:37:55.650566   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:37:55.650651   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:37:55.661175   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:37:55.661252   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:37:55.671509   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:37:55.671595   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:37:55.682238   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:37:55.682317   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:37:55.692662   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:37:55.692751   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:37:55.702713   20906 logs.go:276] 0 containers: []
	W0923 04:37:55.702724   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:37:55.702790   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:37:55.713235   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:37:55.713250   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:37:55.713255   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:37:55.718414   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:37:55.718423   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:37:55.733608   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:37:55.733619   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:37:55.745252   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:37:55.745267   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:37:55.758602   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:37:55.758617   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:37:55.770583   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:37:55.770595   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:37:55.788582   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:37:55.788596   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:37:55.821783   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:37:55.821796   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:37:55.834867   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:37:55.834886   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:37:55.847704   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:37:55.847720   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:37:55.862139   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:37:55.862156   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:37:55.885689   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:37:55.885697   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:37:55.897013   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:37:55.897029   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:37:55.932078   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:37:55.932095   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:37:55.946615   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:37:55.946628   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:37:58.462787   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:03.464066   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:03.464278   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 04:38:03.482629   20906 logs.go:276] 1 containers: [a3fb0e58ca56]
	I0923 04:38:03.482744   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 04:38:03.498337   20906 logs.go:276] 1 containers: [0d6e927b9d61]
	I0923 04:38:03.498427   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 04:38:03.509923   20906 logs.go:276] 4 containers: [d7e0fb6fd0fc 0c63ff4ad54f a08ca05660a6 91da5fadb655]
	I0923 04:38:03.510013   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 04:38:03.521431   20906 logs.go:276] 1 containers: [dcabd7d718ba]
	I0923 04:38:03.521529   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 04:38:03.534682   20906 logs.go:276] 1 containers: [6b83b2b20bbf]
	I0923 04:38:03.534764   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 04:38:03.545587   20906 logs.go:276] 1 containers: [f14ef81a399a]
	I0923 04:38:03.545671   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 04:38:03.555731   20906 logs.go:276] 0 containers: []
	W0923 04:38:03.555746   20906 logs.go:278] No container was found matching "kindnet"
	I0923 04:38:03.555816   20906 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 04:38:03.566474   20906 logs.go:276] 1 containers: [8ef37136745e]
	I0923 04:38:03.566493   20906 logs.go:123] Gathering logs for kube-apiserver [a3fb0e58ca56] ...
	I0923 04:38:03.566498   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3fb0e58ca56"
	I0923 04:38:03.583659   20906 logs.go:123] Gathering logs for kubelet ...
	I0923 04:38:03.583667   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0923 04:38:03.617648   20906 logs.go:123] Gathering logs for describe nodes ...
	I0923 04:38:03.617658   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 04:38:03.652585   20906 logs.go:123] Gathering logs for coredns [0c63ff4ad54f] ...
	I0923 04:38:03.652595   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c63ff4ad54f"
	I0923 04:38:03.664994   20906 logs.go:123] Gathering logs for kube-controller-manager [f14ef81a399a] ...
	I0923 04:38:03.665005   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f14ef81a399a"
	I0923 04:38:03.682577   20906 logs.go:123] Gathering logs for dmesg ...
	I0923 04:38:03.682592   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 04:38:03.686836   20906 logs.go:123] Gathering logs for etcd [0d6e927b9d61] ...
	I0923 04:38:03.686842   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d6e927b9d61"
	I0923 04:38:03.700901   20906 logs.go:123] Gathering logs for coredns [d7e0fb6fd0fc] ...
	I0923 04:38:03.700911   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7e0fb6fd0fc"
	I0923 04:38:03.712771   20906 logs.go:123] Gathering logs for coredns [a08ca05660a6] ...
	I0923 04:38:03.712784   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a08ca05660a6"
	I0923 04:38:03.726251   20906 logs.go:123] Gathering logs for coredns [91da5fadb655] ...
	I0923 04:38:03.726260   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91da5fadb655"
	I0923 04:38:03.740787   20906 logs.go:123] Gathering logs for kube-scheduler [dcabd7d718ba] ...
	I0923 04:38:03.740797   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcabd7d718ba"
	I0923 04:38:03.756305   20906 logs.go:123] Gathering logs for kube-proxy [6b83b2b20bbf] ...
	I0923 04:38:03.756315   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b83b2b20bbf"
	I0923 04:38:03.768140   20906 logs.go:123] Gathering logs for storage-provisioner [8ef37136745e] ...
	I0923 04:38:03.768150   20906 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef37136745e"
	I0923 04:38:03.780111   20906 logs.go:123] Gathering logs for Docker ...
	I0923 04:38:03.780120   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 04:38:03.804481   20906 logs.go:123] Gathering logs for container status ...
	I0923 04:38:03.804490   20906 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 04:38:06.318554   20906 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0923 04:38:11.320816   20906 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0923 04:38:11.325467   20906 out.go:201] 
	W0923 04:38:11.328466   20906 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0923 04:38:11.328479   20906 out.go:270] * 
	W0923 04:38:11.329366   20906 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:38:11.340371   20906 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-231000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (585.78s)
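The api_server.go:253 / api_server.go:269 pairs that dominate this failure are minikube polling the apiserver's /healthz endpoint, gathering logs after each failed probe, until the 6m0s node-start budget runs out and GUEST_START is reported. As a hedged illustration of that polling shape (not minikube's actual implementation; waitForHealthz is a hypothetical helper, with the URL and the roughly 5s-per-probe / 2.5s-pause rhythm taken from the timestamps above):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz probes url until it returns 200 OK or the overall
// deadline passes. Each probe gets its own short timeout, matching the
// 5s "context deadline exceeded" gaps visible in the log above.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the apiserver inside the VM serves a self-signed certificate
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2500 * time.Millisecond) // pause before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err) // the caller maps this to the GUEST_START exit above
	}
}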

TestPause/serial/Start (10.43s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-309000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-309000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.3792265s)

-- stdout --
	* [pause-309000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-309000" primary control-plane node in "pause-309000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-309000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-309000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-309000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-309000 -n pause-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-309000 -n pause-309000: exit status 7 (47.570709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.43s)
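This failure, and every qemu2 start below it, dies at the same point: the socket_vmnet networking helper refuses connections on /var/run/socket_vmnet, so the VM is never created and minikube exits with GUEST_PROVISION before Kubernetes is attempted. A quick reachability probe for that socket, as a diagnostic sketch (the path comes from the log; this is not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the control socket the way a socket_vmnet client would.
	// "connection refused" means the socket_vmnet daemon is not running
	// (or its socket file is stale), matching the errors above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Since the daemon is a host-level prerequisite, the per-test retry ("StartHost failed, but will try again") cannot recover; every qemu2-driver test on this agent fails identically until the daemon is restarted.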

TestNoKubernetes/serial/StartWithK8s (10.04s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-857000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-857000 --driver=qemu2 : exit status 80 (9.9895715s)

-- stdout --
	* [NoKubernetes-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-857000" primary control-plane node in "NoKubernetes-857000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-857000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-857000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-857000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000: exit status 7 (52.732875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.04s)

TestNoKubernetes/serial/StartWithStopK8s (7.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2 : exit status 80 (7.449228833s)

-- stdout --
	* [NoKubernetes-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-857000
	* Restarting existing qemu2 VM for "NoKubernetes-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-857000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000: exit status 7 (51.454042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.50s)

TestNoKubernetes/serial/Start (7.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2 : exit status 80 (7.467154542s)

-- stdout --
	* [NoKubernetes-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-857000
	* Restarting existing qemu2 VM for "NoKubernetes-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-857000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000: exit status 7 (36.583334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (7.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.46s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19690
- KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1626921270/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.46s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.4s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19690
- KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1507852789/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.40s)
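Both TestHyperkitDriverSkipUpgrade variants fail before any upgrade logic runs: hyperkit is an Intel-only hypervisor, and this agent is darwin/arm64, so minikube rejects the driver with DRV_UNSUPPORTED_OS (exit status 56). The platform gate reduces to a check like the following sketch (illustrative only, not minikube's actual code):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// hyperkit runs only on Intel Macs; on Apple Silicon the driver must
	// be rejected up front, which is what exit status 56 reports.
	if runtime.GOOS == "darwin" && runtime.GOARCH != "amd64" {
		fmt.Printf("driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
		return
	}
	fmt.Println("hyperkit driver is usable on this platform")
}

On arm64 agents these two tests can never exercise the upgrade path, which suggests a missing architecture skip in the test rather than a regression in the driver updater.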

TestNoKubernetes/serial/StartNoArgs (5.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-857000 --driver=qemu2 
I0923 04:39:23.983554   18914 install.go:79] stdout: 
W0923 04:39:23.983654   18914 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit 

I0923 04:39:23.983670   18914 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit]
I0923 04:39:23.996103   18914 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit]
I0923 04:39:24.009162   18914 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit]
I0923 04:39:24.017399   18914 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit]
I0923 04:39:24.033135   18914 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 04:39:24.033239   18914 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I0923 04:39:25.814061   18914 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0923 04:39:25.814091   18914 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0923 04:39:25.814133   18914 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0923 04:39:25.814169   18914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit
I0923 04:39:26.234997   18914 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40] Decompressors:map[bz2:0x140006ed660 gz:0x140006ed668 tar:0x140006ed610 tar.bz2:0x140006ed620 tar.gz:0x140006ed630 tar.xz:0x140006ed640 tar.zst:0x140006ed650 tbz2:0x140006ed620 tgz:0x140006ed630 txz:0x140006ed640 tzst:0x140006ed650 xz:0x140006ed670 zip:0x140006ed680 zst:0x140006ed678] Getters:map[file:0x14000065510 http:0x140001acd70 https:0x140001acdc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 04:39:26.235110   18914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-857000 --driver=qemu2 : exit status 80 (5.299421959s)

-- stdout --
	* [NoKubernetes-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-857000
	* Restarting existing qemu2 VM for "NoKubernetes-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-857000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-857000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-857000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000
I0923 04:39:29.220646   18914 install.go:79] stdout: 
W0923 04:39:29.220780   18914 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit 

I0923 04:39:29.220809   18914 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit]
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-857000 -n NoKubernetes-857000: exit status 7 (67.386209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-857000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.37s)
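The interleaved install.go and download.go lines in this failure (pid 18914) come from a concurrent hyperkit driver-update check: it finds the on-disk docker-machine-driver-hyperkit at version 1.2.0, wants a newer one, tries the architecture-suffixed release asset first, and falls back to the unsuffixed asset when the arm64 checksum file 404s (driver.go:46 above). A sketch of that try-arch-then-fall-back shape, with a hypothetical download helper standing in for the checksum-verifying go-getter call the real code uses:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"runtime"
)

// download is a hypothetical stand-in for the real checksum-verifying
// fetch; it just saves the URL to dst and fails on a non-200 response.
func download(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	dst := "docker-machine-driver-hyperkit"
	// Prefer the arch-specific asset; older releases shipped only a common
	// binary, so a 404 is expected and triggers the fallback, as logged above.
	if err := download(base+"-"+runtime.GOARCH, dst); err != nil {
		fmt.Println("arch-specific download failed, trying common asset:", err)
		if err := download(base, dst); err != nil {
			fmt.Println("common download failed:", err)
		}
	}
}

The test failure itself, though, is the same socket_vmnet connection refusal as the rest of this group; the driver-download chatter is incidental.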

TestNetworkPlugins/group/auto/Start (10s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.000021s)

-- stdout --
	* [auto-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-897000" primary control-plane node in "auto-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:40:02.041650   21654 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:40:02.041773   21654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:02.041777   21654 out.go:358] Setting ErrFile to fd 2...
	I0923 04:40:02.041779   21654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:02.041910   21654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:40:02.042903   21654 out.go:352] Setting JSON to false
	I0923 04:40:02.058969   21654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9573,"bootTime":1727082029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:40:02.059032   21654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:40:02.063701   21654 out.go:177] * [auto-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:40:02.072426   21654 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:40:02.072484   21654 notify.go:220] Checking for updates...
	I0923 04:40:02.079479   21654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:40:02.082400   21654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:40:02.085411   21654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:40:02.086795   21654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:40:02.090433   21654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:40:02.093757   21654 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:02.093825   21654 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:02.093880   21654 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:40:02.098254   21654 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:40:02.105426   21654 start.go:297] selected driver: qemu2
	I0923 04:40:02.105432   21654 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:40:02.105440   21654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:40:02.107863   21654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:40:02.110390   21654 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:40:02.113504   21654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:40:02.113538   21654 cni.go:84] Creating CNI manager for ""
	I0923 04:40:02.113561   21654 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:40:02.113566   21654 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:40:02.113592   21654 start.go:340] cluster config:
	{Name:auto-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:40:02.117366   21654 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:40:02.125400   21654 out.go:177] * Starting "auto-897000" primary control-plane node in "auto-897000" cluster
	I0923 04:40:02.129404   21654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:40:02.129418   21654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:40:02.129425   21654 cache.go:56] Caching tarball of preloaded images
	I0923 04:40:02.129497   21654 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:40:02.129503   21654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:40:02.129573   21654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/auto-897000/config.json ...
	I0923 04:40:02.129589   21654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/auto-897000/config.json: {Name:mka546eee3eb89b5983462ff018caaa0872a1c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:40:02.129838   21654 start.go:360] acquireMachinesLock for auto-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:02.129874   21654 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "auto-897000"
	I0923 04:40:02.129887   21654 start.go:93] Provisioning new machine with config: &{Name:auto-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:02.129915   21654 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:02.137406   21654 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:02.155339   21654 start.go:159] libmachine.API.Create for "auto-897000" (driver="qemu2")
	I0923 04:40:02.155381   21654 client.go:168] LocalClient.Create starting
	I0923 04:40:02.155449   21654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:02.155482   21654 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:02.155494   21654 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:02.155531   21654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:02.155561   21654 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:02.155569   21654 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:02.155991   21654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:02.320669   21654 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:02.456381   21654 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:02.456387   21654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:02.456624   21654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2
	I0923 04:40:02.466198   21654 main.go:141] libmachine: STDOUT: 
	I0923 04:40:02.466220   21654 main.go:141] libmachine: STDERR: 
	I0923 04:40:02.466275   21654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2 +20000M
	I0923 04:40:02.474099   21654 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:02.474111   21654 main.go:141] libmachine: STDERR: 
	I0923 04:40:02.474122   21654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2
	I0923 04:40:02.474128   21654 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:02.474139   21654 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:02.474163   21654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:b3:3b:16:98:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2
	I0923 04:40:02.475817   21654 main.go:141] libmachine: STDOUT: 
	I0923 04:40:02.475830   21654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:02.475855   21654 client.go:171] duration metric: took 320.470417ms to LocalClient.Create
	I0923 04:40:04.478029   21654 start.go:128] duration metric: took 2.348103292s to createHost
	I0923 04:40:04.478084   21654 start.go:83] releasing machines lock for "auto-897000", held for 2.348212s
	W0923 04:40:04.478207   21654 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:04.493280   21654 out.go:177] * Deleting "auto-897000" in qemu2 ...
	W0923 04:40:04.527582   21654 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:04.527599   21654 start.go:729] Will try again in 5 seconds ...
	I0923 04:40:09.529851   21654 start.go:360] acquireMachinesLock for auto-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:09.530290   21654 start.go:364] duration metric: took 334.917µs to acquireMachinesLock for "auto-897000"
	I0923 04:40:09.530393   21654 start.go:93] Provisioning new machine with config: &{Name:auto-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:09.530669   21654 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:09.536315   21654 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:09.586097   21654 start.go:159] libmachine.API.Create for "auto-897000" (driver="qemu2")
	I0923 04:40:09.586138   21654 client.go:168] LocalClient.Create starting
	I0923 04:40:09.586253   21654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:09.586323   21654 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:09.586339   21654 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:09.586402   21654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:09.586447   21654 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:09.586463   21654 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:09.587299   21654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:09.769967   21654 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:09.941583   21654 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:09.941589   21654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:09.941839   21654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2
	I0923 04:40:09.951580   21654 main.go:141] libmachine: STDOUT: 
	I0923 04:40:09.951601   21654 main.go:141] libmachine: STDERR: 
	I0923 04:40:09.951667   21654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2 +20000M
	I0923 04:40:09.959639   21654 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:09.959658   21654 main.go:141] libmachine: STDERR: 
	I0923 04:40:09.959671   21654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2
	I0923 04:40:09.959676   21654 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:09.959684   21654 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:09.959709   21654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d2:a2:10:1e:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/auto-897000/disk.qcow2
	I0923 04:40:09.961407   21654 main.go:141] libmachine: STDOUT: 
	I0923 04:40:09.961420   21654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:09.961432   21654 client.go:171] duration metric: took 375.288708ms to LocalClient.Create
	I0923 04:40:11.963589   21654 start.go:128] duration metric: took 2.432897416s to createHost
	I0923 04:40:11.963648   21654 start.go:83] releasing machines lock for "auto-897000", held for 2.433346583s
	W0923 04:40:11.964042   21654 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:11.978633   21654 out.go:201] 
	W0923 04:40:11.982855   21654 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:40:11.982901   21654 out.go:270] * 
	* 
	W0923 04:40:11.985685   21654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:40:11.998752   21654 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.00s)
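Diagnosis note: every attempt above dies at the same step — socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and host creation aborts. The sketch below is a minimal diagnostic probe, not part of the test suite; the socket path is simply the SocketVMnetPath value from the logs, and the file name is invented:

// probe_socket_vmnet.go — minimal diagnostic sketch, assuming the default
// socket path shown in the failing runs. Not part of the minikube test suite.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path observed in the failures above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A stopped daemon reproduces the report's error ("connect: connection refused").
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections; the daemon appears to be up\n", sock)
}

If the probe fails on the build agent, the likely remediation is restarting the root-owned socket_vmnet service (for Homebrew installs, something like `sudo brew services start socket_vmnet`, per the socket_vmnet documentation) — an assumption, since the report does not capture the service state.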

TestNetworkPlugins/group/kindnet/Start (9.89s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.887246125s)

-- stdout --
	* [kindnet-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-897000" primary control-plane node in "kindnet-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:40:14.188190   21766 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:40:14.188335   21766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:14.188338   21766 out.go:358] Setting ErrFile to fd 2...
	I0923 04:40:14.188340   21766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:14.188473   21766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:40:14.189563   21766 out.go:352] Setting JSON to false
	I0923 04:40:14.205745   21766 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9585,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:40:14.205845   21766 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:40:14.211327   21766 out.go:177] * [kindnet-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:40:14.219248   21766 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:40:14.219289   21766 notify.go:220] Checking for updates...
	I0923 04:40:14.226124   21766 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:40:14.229193   21766 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:40:14.233071   21766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:40:14.236151   21766 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:40:14.239201   21766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:40:14.242423   21766 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:14.242491   21766 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:14.242545   21766 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:40:14.247188   21766 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:40:14.254098   21766 start.go:297] selected driver: qemu2
	I0923 04:40:14.254104   21766 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:40:14.254110   21766 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:40:14.256463   21766 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:40:14.259114   21766 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:40:14.262191   21766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:40:14.262209   21766 cni.go:84] Creating CNI manager for "kindnet"
	I0923 04:40:14.262212   21766 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 04:40:14.262246   21766 start.go:340] cluster config:
	{Name:kindnet-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:40:14.266000   21766 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:40:14.274157   21766 out.go:177] * Starting "kindnet-897000" primary control-plane node in "kindnet-897000" cluster
	I0923 04:40:14.277978   21766 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:40:14.277997   21766 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:40:14.278006   21766 cache.go:56] Caching tarball of preloaded images
	I0923 04:40:14.278123   21766 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:40:14.278129   21766 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:40:14.278203   21766 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kindnet-897000/config.json ...
	I0923 04:40:14.278215   21766 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kindnet-897000/config.json: {Name:mk986aae47641f7e9d0f5429f65422bab73b21a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:40:14.278438   21766 start.go:360] acquireMachinesLock for kindnet-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:14.278475   21766 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "kindnet-897000"
	I0923 04:40:14.278491   21766 start.go:93] Provisioning new machine with config: &{Name:kindnet-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:14.278527   21766 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:14.286159   21766 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:14.304858   21766 start.go:159] libmachine.API.Create for "kindnet-897000" (driver="qemu2")
	I0923 04:40:14.304884   21766 client.go:168] LocalClient.Create starting
	I0923 04:40:14.304958   21766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:14.304990   21766 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:14.305000   21766 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:14.305037   21766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:14.305062   21766 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:14.305072   21766 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:14.305433   21766 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:14.472456   21766 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:14.554956   21766 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:14.554962   21766 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:14.555202   21766 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2
	I0923 04:40:14.564398   21766 main.go:141] libmachine: STDOUT: 
	I0923 04:40:14.564418   21766 main.go:141] libmachine: STDERR: 
	I0923 04:40:14.564485   21766 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2 +20000M
	I0923 04:40:14.572379   21766 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:14.572393   21766 main.go:141] libmachine: STDERR: 
	I0923 04:40:14.572411   21766 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2
	I0923 04:40:14.572416   21766 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:14.572426   21766 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:14.572453   21766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:67:20:8f:bb:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2
	I0923 04:40:14.574102   21766 main.go:141] libmachine: STDOUT: 
	I0923 04:40:14.574116   21766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:14.574137   21766 client.go:171] duration metric: took 269.244709ms to LocalClient.Create
	I0923 04:40:16.576312   21766 start.go:128] duration metric: took 2.297773417s to createHost
	I0923 04:40:16.576365   21766 start.go:83] releasing machines lock for "kindnet-897000", held for 2.297890916s
	W0923 04:40:16.576441   21766 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:16.590637   21766 out.go:177] * Deleting "kindnet-897000" in qemu2 ...
	W0923 04:40:16.621132   21766 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:16.621154   21766 start.go:729] Will try again in 5 seconds ...
	I0923 04:40:21.623384   21766 start.go:360] acquireMachinesLock for kindnet-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:21.623863   21766 start.go:364] duration metric: took 377.458µs to acquireMachinesLock for "kindnet-897000"
	I0923 04:40:21.623995   21766 start.go:93] Provisioning new machine with config: &{Name:kindnet-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:21.624304   21766 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:21.641202   21766 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:21.690634   21766 start.go:159] libmachine.API.Create for "kindnet-897000" (driver="qemu2")
	I0923 04:40:21.690690   21766 client.go:168] LocalClient.Create starting
	I0923 04:40:21.690804   21766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:21.690866   21766 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:21.690888   21766 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:21.690950   21766 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:21.690995   21766 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:21.691007   21766 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:21.691744   21766 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:21.868085   21766 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:21.980283   21766 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:21.980292   21766 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:21.980520   21766 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2
	I0923 04:40:21.989995   21766 main.go:141] libmachine: STDOUT: 
	I0923 04:40:21.990011   21766 main.go:141] libmachine: STDERR: 
	I0923 04:40:21.990065   21766 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2 +20000M
	I0923 04:40:21.997849   21766 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:21.997863   21766 main.go:141] libmachine: STDERR: 
	I0923 04:40:21.997874   21766 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2
	I0923 04:40:21.997879   21766 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:21.997884   21766 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:21.997922   21766 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:9c:96:3c:a9:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kindnet-897000/disk.qcow2
	I0923 04:40:21.999612   21766 main.go:141] libmachine: STDOUT: 
	I0923 04:40:21.999626   21766 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:21.999639   21766 client.go:171] duration metric: took 308.943625ms to LocalClient.Create
	I0923 04:40:24.001807   21766 start.go:128] duration metric: took 2.377485833s to createHost
	I0923 04:40:24.001922   21766 start.go:83] releasing machines lock for "kindnet-897000", held for 2.378045s
	W0923 04:40:24.002228   21766 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:24.016772   21766 out.go:201] 
	W0923 04:40:24.021044   21766 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:40:24.021103   21766 out.go:270] * 
	* 
	W0923 04:40:24.023434   21766 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:40:24.032800   21766 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.89s)
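Note on the failure shape: the stderr above shows the same control flow for every network plugin — createHost fails, the profile is deleted, minikube waits 5 seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION / exit status 80. The Go sketch below only illustrates that observed retry shape; it is not minikube's implementation, and createHost here is an invented stand-in:

// retry_shape.go — illustrative sketch of the retry behavior visible in these
// logs (fail, wait 5s, retry once, exit 80). Names are hypothetical.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for the host-creation step that keeps failing above.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
	if err := createHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // matches the tests' "exit status 80"
	}
}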

TestNetworkPlugins/group/flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.777621584s)

-- stdout --
	* [flannel-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-897000" primary control-plane node in "flannel-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:40:26.339805   21879 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:40:26.339932   21879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:26.339935   21879 out.go:358] Setting ErrFile to fd 2...
	I0923 04:40:26.339939   21879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:26.340069   21879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:40:26.341148   21879 out.go:352] Setting JSON to false
	I0923 04:40:26.357211   21879 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9597,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:40:26.357282   21879 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:40:26.362647   21879 out.go:177] * [flannel-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:40:26.370853   21879 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:40:26.370926   21879 notify.go:220] Checking for updates...
	I0923 04:40:26.379748   21879 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:40:26.382857   21879 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:40:26.385909   21879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:40:26.388850   21879 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:40:26.391886   21879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:40:26.395139   21879 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:26.395216   21879 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:26.395266   21879 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:40:26.399804   21879 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:40:26.406735   21879 start.go:297] selected driver: qemu2
	I0923 04:40:26.406740   21879 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:40:26.406746   21879 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:40:26.409300   21879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:40:26.412861   21879 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:40:26.415853   21879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:40:26.415869   21879 cni.go:84] Creating CNI manager for "flannel"
	I0923 04:40:26.415872   21879 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0923 04:40:26.415906   21879 start.go:340] cluster config:
	{Name:flannel-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:40:26.419837   21879 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:40:26.428762   21879 out.go:177] * Starting "flannel-897000" primary control-plane node in "flannel-897000" cluster
	I0923 04:40:26.432671   21879 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:40:26.432688   21879 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:40:26.432698   21879 cache.go:56] Caching tarball of preloaded images
	I0923 04:40:26.432789   21879 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:40:26.432796   21879 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:40:26.432860   21879 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/flannel-897000/config.json ...
	I0923 04:40:26.432871   21879 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/flannel-897000/config.json: {Name:mke2a1c7dfb71f4b1b9c68757d7089023351b5f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:40:26.433096   21879 start.go:360] acquireMachinesLock for flannel-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:26.433133   21879 start.go:364] duration metric: took 30.041µs to acquireMachinesLock for "flannel-897000"
	I0923 04:40:26.433146   21879 start.go:93] Provisioning new machine with config: &{Name:flannel-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:26.433172   21879 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:26.436839   21879 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:26.455171   21879 start.go:159] libmachine.API.Create for "flannel-897000" (driver="qemu2")
	I0923 04:40:26.455204   21879 client.go:168] LocalClient.Create starting
	I0923 04:40:26.455281   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:26.455314   21879 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:26.455324   21879 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:26.455360   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:26.455384   21879 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:26.455392   21879 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:26.455818   21879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:26.622061   21879 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:26.675367   21879 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:26.675373   21879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:26.675602   21879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2
	I0923 04:40:26.684703   21879 main.go:141] libmachine: STDOUT: 
	I0923 04:40:26.684719   21879 main.go:141] libmachine: STDERR: 
	I0923 04:40:26.684775   21879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2 +20000M
	I0923 04:40:26.692711   21879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:26.692725   21879 main.go:141] libmachine: STDERR: 
	I0923 04:40:26.692737   21879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2
	I0923 04:40:26.692742   21879 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:26.692753   21879 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:26.692782   21879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:03:d0:9c:4e:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2
	I0923 04:40:26.694468   21879 main.go:141] libmachine: STDOUT: 
	I0923 04:40:26.694479   21879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:26.694497   21879 client.go:171] duration metric: took 239.287042ms to LocalClient.Create
	I0923 04:40:28.696663   21879 start.go:128] duration metric: took 2.263479708s to createHost
	I0923 04:40:28.696726   21879 start.go:83] releasing machines lock for "flannel-897000", held for 2.263584208s
	W0923 04:40:28.696831   21879 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:28.708182   21879 out.go:177] * Deleting "flannel-897000" in qemu2 ...
	W0923 04:40:28.739317   21879 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:28.739342   21879 start.go:729] Will try again in 5 seconds ...
	I0923 04:40:33.741528   21879 start.go:360] acquireMachinesLock for flannel-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:33.741984   21879 start.go:364] duration metric: took 354.959µs to acquireMachinesLock for "flannel-897000"
	I0923 04:40:33.742109   21879 start.go:93] Provisioning new machine with config: &{Name:flannel-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:flannel-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:33.742409   21879 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:33.754102   21879 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:33.805756   21879 start.go:159] libmachine.API.Create for "flannel-897000" (driver="qemu2")
	I0923 04:40:33.805800   21879 client.go:168] LocalClient.Create starting
	I0923 04:40:33.805941   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:33.805998   21879 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:33.806022   21879 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:33.806091   21879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:33.806138   21879 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:33.806150   21879 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:33.806768   21879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:33.982267   21879 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:34.018987   21879 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:34.018992   21879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:34.019220   21879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2
	I0923 04:40:34.028401   21879 main.go:141] libmachine: STDOUT: 
	I0923 04:40:34.028418   21879 main.go:141] libmachine: STDERR: 
	I0923 04:40:34.028478   21879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2 +20000M
	I0923 04:40:34.036273   21879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:34.036288   21879 main.go:141] libmachine: STDERR: 
	I0923 04:40:34.036306   21879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2
	I0923 04:40:34.036311   21879 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:34.036319   21879 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:34.036349   21879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:e8:86:47:c0:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/flannel-897000/disk.qcow2
	I0923 04:40:34.037896   21879 main.go:141] libmachine: STDOUT: 
	I0923 04:40:34.037909   21879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:34.037922   21879 client.go:171] duration metric: took 232.117833ms to LocalClient.Create
	I0923 04:40:36.040082   21879 start.go:128] duration metric: took 2.297651375s to createHost
	I0923 04:40:36.040146   21879 start.go:83] releasing machines lock for "flannel-897000", held for 2.298147125s
	W0923 04:40:36.040606   21879 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:36.057242   21879 out.go:201] 
	W0923 04:40:36.062520   21879 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:40:36.062560   21879 out.go:270] * 
	* 
	W0923 04:40:36.065421   21879 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:40:36.074265   21879 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.78s)
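
Every start in this group fails at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket, so the failure is environmental (the daemon is not listening on the CI host) rather than specific to the flannel CNI under test. As a minimal sketch of a pre-flight check, using a hypothetical helper that is not part of minikube, one could probe the socket before launching QEMU:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the socket_vmnet unix socket and reports
	// whether anything is listening; a "connection refused" result here
	// matches the error repeated in the logs above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		conn.Close()
		return nil
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // prints the dial error, e.g. connection refused
			return
		}
		fmt.Println("socket_vmnet is listening")
	}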

TestNetworkPlugins/group/enable-default-cni/Start (9.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.847923541s)

-- stdout --
	* [enable-default-cni-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-897000" primary control-plane node in "enable-default-cni-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:40:38.504943   21996 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:40:38.505087   21996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:38.505093   21996 out.go:358] Setting ErrFile to fd 2...
	I0923 04:40:38.505095   21996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:38.505202   21996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:40:38.506238   21996 out.go:352] Setting JSON to false
	I0923 04:40:38.522435   21996 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9609,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:40:38.522492   21996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:40:38.528150   21996 out.go:177] * [enable-default-cni-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:40:38.535958   21996 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:40:38.536022   21996 notify.go:220] Checking for updates...
	I0923 04:40:38.543978   21996 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:40:38.546961   21996 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:40:38.550006   21996 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:40:38.553002   21996 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:40:38.555944   21996 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:40:38.559347   21996 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:38.559416   21996 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:38.559481   21996 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:40:38.562849   21996 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:40:38.570000   21996 start.go:297] selected driver: qemu2
	I0923 04:40:38.570007   21996 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:40:38.570017   21996 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:40:38.572330   21996 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:40:38.574937   21996 out.go:177] * Automatically selected the socket_vmnet network
	E0923 04:40:38.578023   21996 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0923 04:40:38.578036   21996 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:40:38.578058   21996 cni.go:84] Creating CNI manager for "bridge"
	I0923 04:40:38.578064   21996 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:40:38.578096   21996 start.go:340] cluster config:
	{Name:enable-default-cni-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:40:38.581814   21996 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:40:38.587916   21996 out.go:177] * Starting "enable-default-cni-897000" primary control-plane node in "enable-default-cni-897000" cluster
	I0923 04:40:38.591987   21996 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:40:38.592005   21996 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:40:38.592014   21996 cache.go:56] Caching tarball of preloaded images
	I0923 04:40:38.592106   21996 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:40:38.592112   21996 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:40:38.592174   21996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/enable-default-cni-897000/config.json ...
	I0923 04:40:38.592185   21996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/enable-default-cni-897000/config.json: {Name:mkc70877d258149fdd933df5df1b0cfceebbe9fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:40:38.592532   21996 start.go:360] acquireMachinesLock for enable-default-cni-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:38.592570   21996 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "enable-default-cni-897000"
	I0923 04:40:38.592586   21996 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:38.592613   21996 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:38.599945   21996 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:38.617915   21996 start.go:159] libmachine.API.Create for "enable-default-cni-897000" (driver="qemu2")
	I0923 04:40:38.617947   21996 client.go:168] LocalClient.Create starting
	I0923 04:40:38.618037   21996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:38.618069   21996 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:38.618079   21996 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:38.618120   21996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:38.618144   21996 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:38.618153   21996 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:38.618584   21996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:38.783923   21996 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:38.871952   21996 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:38.871959   21996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:38.872181   21996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2
	I0923 04:40:38.881437   21996 main.go:141] libmachine: STDOUT: 
	I0923 04:40:38.881452   21996 main.go:141] libmachine: STDERR: 
	I0923 04:40:38.881522   21996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2 +20000M
	I0923 04:40:38.889353   21996 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:38.889370   21996 main.go:141] libmachine: STDERR: 
	I0923 04:40:38.889383   21996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2
	I0923 04:40:38.889388   21996 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:38.889400   21996 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:38.889427   21996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:17:5a:35:f3:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2
	I0923 04:40:38.891090   21996 main.go:141] libmachine: STDOUT: 
	I0923 04:40:38.891109   21996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:38.891129   21996 client.go:171] duration metric: took 273.177125ms to LocalClient.Create
	I0923 04:40:40.893288   21996 start.go:128] duration metric: took 2.30066475s to createHost
	I0923 04:40:40.893333   21996 start.go:83] releasing machines lock for "enable-default-cni-897000", held for 2.300763541s
	W0923 04:40:40.893389   21996 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:40.904878   21996 out.go:177] * Deleting "enable-default-cni-897000" in qemu2 ...
	W0923 04:40:40.933924   21996 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:40.933944   21996 start.go:729] Will try again in 5 seconds ...
	I0923 04:40:45.936138   21996 start.go:360] acquireMachinesLock for enable-default-cni-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:45.936569   21996 start.go:364] duration metric: took 339.667µs to acquireMachinesLock for "enable-default-cni-897000"
	I0923 04:40:45.936674   21996 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:45.936959   21996 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:45.941555   21996 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:45.992300   21996 start.go:159] libmachine.API.Create for "enable-default-cni-897000" (driver="qemu2")
	I0923 04:40:45.992343   21996 client.go:168] LocalClient.Create starting
	I0923 04:40:45.992454   21996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:45.992517   21996 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:45.992533   21996 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:45.992602   21996 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:45.992647   21996 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:45.992662   21996 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:45.993177   21996 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:46.177005   21996 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:46.259119   21996 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:46.259125   21996 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:46.259366   21996 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2
	I0923 04:40:46.268557   21996 main.go:141] libmachine: STDOUT: 
	I0923 04:40:46.268576   21996 main.go:141] libmachine: STDERR: 
	I0923 04:40:46.268645   21996 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2 +20000M
	I0923 04:40:46.276539   21996 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:46.276556   21996 main.go:141] libmachine: STDERR: 
	I0923 04:40:46.276583   21996 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2
	I0923 04:40:46.276587   21996 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:46.276600   21996 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:46.276626   21996 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:77:0d:e4:12:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/enable-default-cni-897000/disk.qcow2
	I0923 04:40:46.278317   21996 main.go:141] libmachine: STDOUT: 
	I0923 04:40:46.278331   21996 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:46.278345   21996 client.go:171] duration metric: took 285.996458ms to LocalClient.Create
	I0923 04:40:48.280512   21996 start.go:128] duration metric: took 2.343511917s to createHost
	I0923 04:40:48.280562   21996 start.go:83] releasing machines lock for "enable-default-cni-897000", held for 2.343980833s
	W0923 04:40:48.280988   21996 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:48.294653   21996 out.go:201] 
	W0923 04:40:48.298791   21996 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:40:48.298818   21996 out.go:270] * 
	* 
	W0923 04:40:48.301532   21996 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:40:48.310705   21996 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.85s)
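
The stderr above also shows the shape of minikube's recovery path: one failed createHost, deletion of the half-created profile, a fixed five-second wait ("Will try again in 5 seconds ..."), one more attempt, and then exit status 80. A simplified sketch of that retry pattern, with assumed function names rather than minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the VM-creation step; in this sketch it
	// always fails the way the logs above do.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// startWithRetry mirrors the logged behavior: warn, wait a fixed
	// five seconds, retry once, then surface the error to the caller.
	func startWithRetry() error {
		err := createHost()
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return createHost()
	}

	func main() {
		if err := startWithRetry(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}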

TestNetworkPlugins/group/bridge/Start (10.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.050987584s)

-- stdout --
	* [bridge-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-897000" primary control-plane node in "bridge-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:40:50.546658   22105 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:40:50.546794   22105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:50.546797   22105 out.go:358] Setting ErrFile to fd 2...
	I0923 04:40:50.546799   22105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:40:50.546958   22105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:40:50.547993   22105 out.go:352] Setting JSON to false
	I0923 04:40:50.564081   22105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9621,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:40:50.564151   22105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:40:50.570144   22105 out.go:177] * [bridge-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:40:50.578035   22105 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:40:50.578086   22105 notify.go:220] Checking for updates...
	I0923 04:40:50.584004   22105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:40:50.586983   22105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:40:50.590987   22105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:40:50.594041   22105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:40:50.597155   22105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:40:50.600441   22105 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:50.600509   22105 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:40:50.600568   22105 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:40:50.604979   22105 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:40:50.611931   22105 start.go:297] selected driver: qemu2
	I0923 04:40:50.611938   22105 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:40:50.611945   22105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:40:50.614294   22105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:40:50.618028   22105 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:40:50.621062   22105 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:40:50.621080   22105 cni.go:84] Creating CNI manager for "bridge"
	I0923 04:40:50.621087   22105 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:40:50.621130   22105 start.go:340] cluster config:
	{Name:bridge-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:40:50.624881   22105 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:40:50.633025   22105 out.go:177] * Starting "bridge-897000" primary control-plane node in "bridge-897000" cluster
	I0923 04:40:50.636982   22105 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:40:50.636998   22105 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:40:50.637008   22105 cache.go:56] Caching tarball of preloaded images
	I0923 04:40:50.637078   22105 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:40:50.637084   22105 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:40:50.637152   22105 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/bridge-897000/config.json ...
	I0923 04:40:50.637170   22105 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/bridge-897000/config.json: {Name:mk2ad54a43bdff64ac0a344241d48b895fab3039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:40:50.637394   22105 start.go:360] acquireMachinesLock for bridge-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:50.637430   22105 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "bridge-897000"
	I0923 04:40:50.637444   22105 start.go:93] Provisioning new machine with config: &{Name:bridge-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:50.637469   22105 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:50.643897   22105 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:50.662504   22105 start.go:159] libmachine.API.Create for "bridge-897000" (driver="qemu2")
	I0923 04:40:50.662534   22105 client.go:168] LocalClient.Create starting
	I0923 04:40:50.662603   22105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:50.662642   22105 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:50.662657   22105 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:50.662694   22105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:50.662719   22105 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:50.662730   22105 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:50.663139   22105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:50.827598   22105 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:51.087554   22105 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:51.087564   22105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:51.087858   22105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2
	I0923 04:40:51.097744   22105 main.go:141] libmachine: STDOUT: 
	I0923 04:40:51.097759   22105 main.go:141] libmachine: STDERR: 
	I0923 04:40:51.097826   22105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2 +20000M
	I0923 04:40:51.105731   22105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:51.105745   22105 main.go:141] libmachine: STDERR: 
	I0923 04:40:51.105767   22105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2
	I0923 04:40:51.105773   22105 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:51.105785   22105 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:51.105813   22105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:94:e3:d8:ad:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2
	I0923 04:40:51.107401   22105 main.go:141] libmachine: STDOUT: 
	I0923 04:40:51.107413   22105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:51.107430   22105 client.go:171] duration metric: took 444.891917ms to LocalClient.Create
	I0923 04:40:53.109591   22105 start.go:128] duration metric: took 2.47211575s to createHost
	I0923 04:40:53.109660   22105 start.go:83] releasing machines lock for "bridge-897000", held for 2.47223175s
	W0923 04:40:53.109740   22105 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:53.124953   22105 out.go:177] * Deleting "bridge-897000" in qemu2 ...
	W0923 04:40:53.155494   22105 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:40:53.155512   22105 start.go:729] Will try again in 5 seconds ...
	I0923 04:40:58.157753   22105 start.go:360] acquireMachinesLock for bridge-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:40:58.158239   22105 start.go:364] duration metric: took 371.083µs to acquireMachinesLock for "bridge-897000"
	I0923 04:40:58.158357   22105 start.go:93] Provisioning new machine with config: &{Name:bridge-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:bridge-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:40:58.158609   22105 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:40:58.175082   22105 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:40:58.226742   22105 start.go:159] libmachine.API.Create for "bridge-897000" (driver="qemu2")
	I0923 04:40:58.226801   22105 client.go:168] LocalClient.Create starting
	I0923 04:40:58.226934   22105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:40:58.227004   22105 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:58.227023   22105 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:58.227085   22105 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:40:58.227131   22105 main.go:141] libmachine: Decoding PEM data...
	I0923 04:40:58.227150   22105 main.go:141] libmachine: Parsing certificate...
	I0923 04:40:58.227709   22105 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:40:58.401016   22105 main.go:141] libmachine: Creating SSH key...
	I0923 04:40:58.500919   22105 main.go:141] libmachine: Creating Disk image...
	I0923 04:40:58.500931   22105 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:40:58.501144   22105 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2
	I0923 04:40:58.510350   22105 main.go:141] libmachine: STDOUT: 
	I0923 04:40:58.510373   22105 main.go:141] libmachine: STDERR: 
	I0923 04:40:58.510440   22105 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2 +20000M
	I0923 04:40:58.518229   22105 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:40:58.518244   22105 main.go:141] libmachine: STDERR: 
	I0923 04:40:58.518260   22105 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2
	I0923 04:40:58.518267   22105 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:40:58.518282   22105 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:40:58.518321   22105 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:62:5b:f5:0e:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/bridge-897000/disk.qcow2
	I0923 04:40:58.519915   22105 main.go:141] libmachine: STDOUT: 
	I0923 04:40:58.519929   22105 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:40:58.519943   22105 client.go:171] duration metric: took 293.137167ms to LocalClient.Create
	I0923 04:41:00.522119   22105 start.go:128] duration metric: took 2.3634865s to createHost
	I0923 04:41:00.522197   22105 start.go:83] releasing machines lock for "bridge-897000", held for 2.363944584s
	W0923 04:41:00.522586   22105 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:00.537340   22105 out.go:201] 
	W0923 04:41:00.541457   22105 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:41:00.541483   22105 out.go:270] * 
	* 
	W0923 04:41:00.544170   22105 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:41:00.556296   22105 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.05s)
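
Every failure in this group has the same root cause, visible in the stderr above: minikube launches QEMU through socket_vmnet_client, and the client cannot connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch, using the paths recorded in the log; the lsof/launchctl probes are generic macOS checks, and the daemon invocation follows socket_vmnet's documented usage (the gateway address is an example and varies by install):

	ls -l /var/run/socket_vmnet                 # the unix socket minikube dials; should exist
	sudo lsof -U | grep socket_vmnet            # is any process actually listening on it?
	sudo launchctl list | grep -i socket_vmnet  # daemon status, if installed as a LaunchDaemon
	# If nothing is serving the socket, start the daemon by hand (example invocation;
	# adjust the gateway address to your setup):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet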

TestNetworkPlugins/group/kubenet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.842963709s)

-- stdout --
	* [kubenet-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-897000" primary control-plane node in "kubenet-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:41:02.804698   22217 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:41:02.804812   22217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:02.804815   22217 out.go:358] Setting ErrFile to fd 2...
	I0923 04:41:02.804817   22217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:02.804949   22217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:41:02.806010   22217 out.go:352] Setting JSON to false
	I0923 04:41:02.822218   22217 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9633,"bootTime":1727082029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:41:02.822289   22217 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:41:02.828467   22217 out.go:177] * [kubenet-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:41:02.834185   22217 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:41:02.834257   22217 notify.go:220] Checking for updates...
	I0923 04:41:02.841500   22217 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:41:02.844375   22217 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:41:02.848449   22217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:41:02.851490   22217 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:41:02.854371   22217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:41:02.857743   22217 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:02.857813   22217 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:02.857857   22217 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:41:02.862439   22217 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:41:02.869450   22217 start.go:297] selected driver: qemu2
	I0923 04:41:02.869454   22217 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:41:02.869459   22217 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:41:02.871863   22217 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:41:02.875487   22217 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:41:02.876911   22217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:41:02.876930   22217 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0923 04:41:02.876964   22217 start.go:340] cluster config:
	{Name:kubenet-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:41:02.880814   22217 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:41:02.889491   22217 out.go:177] * Starting "kubenet-897000" primary control-plane node in "kubenet-897000" cluster
	I0923 04:41:02.893415   22217 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:41:02.893431   22217 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:41:02.893444   22217 cache.go:56] Caching tarball of preloaded images
	I0923 04:41:02.893533   22217 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:41:02.893539   22217 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:41:02.893597   22217 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kubenet-897000/config.json ...
	I0923 04:41:02.893608   22217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/kubenet-897000/config.json: {Name:mk12ed96bd7ae769946e4c28aab83f413f7ddeef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:41:02.893823   22217 start.go:360] acquireMachinesLock for kubenet-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:02.893855   22217 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "kubenet-897000"
	I0923 04:41:02.893868   22217 start.go:93] Provisioning new machine with config: &{Name:kubenet-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:02.893893   22217 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:02.901445   22217 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:02.919104   22217 start.go:159] libmachine.API.Create for "kubenet-897000" (driver="qemu2")
	I0923 04:41:02.919138   22217 client.go:168] LocalClient.Create starting
	I0923 04:41:02.919228   22217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:02.919259   22217 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:02.919269   22217 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:02.919307   22217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:02.919332   22217 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:02.919341   22217 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:02.919755   22217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:03.085270   22217 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:03.188286   22217 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:03.188292   22217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:03.188477   22217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2
	I0923 04:41:03.197798   22217 main.go:141] libmachine: STDOUT: 
	I0923 04:41:03.197814   22217 main.go:141] libmachine: STDERR: 
	I0923 04:41:03.197878   22217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2 +20000M
	I0923 04:41:03.205869   22217 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:03.205896   22217 main.go:141] libmachine: STDERR: 
	I0923 04:41:03.205909   22217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2
	I0923 04:41:03.205914   22217 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:03.205923   22217 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:03.205953   22217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:11:bd:bb:8b:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2
	I0923 04:41:03.207595   22217 main.go:141] libmachine: STDOUT: 
	I0923 04:41:03.207614   22217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:03.207640   22217 client.go:171] duration metric: took 288.497292ms to LocalClient.Create
	I0923 04:41:05.209807   22217 start.go:128] duration metric: took 2.315902834s to createHost
	I0923 04:41:05.209873   22217 start.go:83] releasing machines lock for "kubenet-897000", held for 2.316018875s
	W0923 04:41:05.209938   22217 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:05.225127   22217 out.go:177] * Deleting "kubenet-897000" in qemu2 ...
	W0923 04:41:05.257264   22217 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:05.257289   22217 start.go:729] Will try again in 5 seconds ...
	I0923 04:41:10.259488   22217 start.go:360] acquireMachinesLock for kubenet-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:10.260042   22217 start.go:364] duration metric: took 405.166µs to acquireMachinesLock for "kubenet-897000"
	I0923 04:41:10.260146   22217 start.go:93] Provisioning new machine with config: &{Name:kubenet-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:10.260453   22217 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:10.277029   22217 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:10.327326   22217 start.go:159] libmachine.API.Create for "kubenet-897000" (driver="qemu2")
	I0923 04:41:10.327376   22217 client.go:168] LocalClient.Create starting
	I0923 04:41:10.327525   22217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:10.327597   22217 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:10.327615   22217 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:10.327681   22217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:10.327727   22217 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:10.327749   22217 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:10.328360   22217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:10.502969   22217 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:10.551895   22217 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:10.551901   22217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:10.552125   22217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2
	I0923 04:41:10.561307   22217 main.go:141] libmachine: STDOUT: 
	I0923 04:41:10.561333   22217 main.go:141] libmachine: STDERR: 
	I0923 04:41:10.561396   22217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2 +20000M
	I0923 04:41:10.569235   22217 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:10.569249   22217 main.go:141] libmachine: STDERR: 
	I0923 04:41:10.569262   22217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2
	I0923 04:41:10.569268   22217 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:10.569277   22217 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:10.569309   22217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:cc:2e:2c:71:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/kubenet-897000/disk.qcow2
	I0923 04:41:10.570928   22217 main.go:141] libmachine: STDOUT: 
	I0923 04:41:10.570942   22217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:10.570956   22217 client.go:171] duration metric: took 243.5745ms to LocalClient.Create
	I0923 04:41:12.573123   22217 start.go:128] duration metric: took 2.312647334s to createHost
	I0923 04:41:12.573198   22217 start.go:83] releasing machines lock for "kubenet-897000", held for 2.313141708s
	W0923 04:41:12.573565   22217 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:12.587368   22217 out.go:201] 
	W0923 04:41:12.589270   22217 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:41:12.589307   22217 out.go:270] * 
	* 
	W0923 04:41:12.591853   22217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:41:12.605290   22217 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
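
The failing start can be replayed outside the test harness with the exact command net_test.go ran (copied from the Run line above); per minikube's own hint in the log, delete the profile afterwards:

	out/minikube-darwin-arm64 start -p kubenet-897000 --memory=3072 --alsologtostderr \
	    --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2
	out/minikube-darwin-arm64 delete -p kubenet-897000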

TestNetworkPlugins/group/custom-flannel/Start (9.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.794642292s)

-- stdout --
	* [custom-flannel-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-897000" primary control-plane node in "custom-flannel-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:41:14.830792   22326 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:41:14.830917   22326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:14.830923   22326 out.go:358] Setting ErrFile to fd 2...
	I0923 04:41:14.830925   22326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:14.831061   22326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:41:14.832149   22326 out.go:352] Setting JSON to false
	I0923 04:41:14.848281   22326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9645,"bootTime":1727082029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:41:14.848344   22326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:41:14.853200   22326 out.go:177] * [custom-flannel-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:41:14.858672   22326 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:41:14.858739   22326 notify.go:220] Checking for updates...
	I0923 04:41:14.866108   22326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:41:14.869005   22326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:41:14.872148   22326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:41:14.875126   22326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:41:14.876538   22326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:41:14.880444   22326 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:14.880511   22326 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:14.880557   22326 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:41:14.885101   22326 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:41:14.890092   22326 start.go:297] selected driver: qemu2
	I0923 04:41:14.890098   22326 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:41:14.890105   22326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:41:14.892341   22326 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:41:14.896100   22326 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:41:14.897462   22326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:41:14.897481   22326 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0923 04:41:14.897497   22326 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0923 04:41:14.897538   22326 start.go:340] cluster config:
	{Name:custom-flannel-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:41:14.901023   22326 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:41:14.909123   22326 out.go:177] * Starting "custom-flannel-897000" primary control-plane node in "custom-flannel-897000" cluster
	I0923 04:41:14.913044   22326 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:41:14.913059   22326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:41:14.913070   22326 cache.go:56] Caching tarball of preloaded images
	I0923 04:41:14.913133   22326 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:41:14.913140   22326 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:41:14.913210   22326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/custom-flannel-897000/config.json ...
	I0923 04:41:14.913221   22326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/custom-flannel-897000/config.json: {Name:mke2adeff2f60e0d96dff6fb8c20c6d04f20ab58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:41:14.913435   22326 start.go:360] acquireMachinesLock for custom-flannel-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:14.913470   22326 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "custom-flannel-897000"
	I0923 04:41:14.913484   22326 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:14.913511   22326 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:14.921102   22326 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:14.938737   22326 start.go:159] libmachine.API.Create for "custom-flannel-897000" (driver="qemu2")
	I0923 04:41:14.938768   22326 client.go:168] LocalClient.Create starting
	I0923 04:41:14.938828   22326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:14.938859   22326 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:14.938869   22326 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:14.938905   22326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:14.938927   22326 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:14.938935   22326 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:14.939339   22326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:15.104044   22326 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:15.192390   22326 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:15.192400   22326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:15.192654   22326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2
	I0923 04:41:15.201726   22326 main.go:141] libmachine: STDOUT: 
	I0923 04:41:15.201746   22326 main.go:141] libmachine: STDERR: 
	I0923 04:41:15.201803   22326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2 +20000M
	I0923 04:41:15.209602   22326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:15.209617   22326 main.go:141] libmachine: STDERR: 
	I0923 04:41:15.209630   22326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2
	I0923 04:41:15.209635   22326 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:15.209647   22326 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:15.209678   22326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:bb:0c:dd:2b:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2
	I0923 04:41:15.211303   22326 main.go:141] libmachine: STDOUT: 
	I0923 04:41:15.211320   22326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:15.211348   22326 client.go:171] duration metric: took 272.568625ms to LocalClient.Create
	I0923 04:41:17.213518   22326 start.go:128] duration metric: took 2.299995041s to createHost
	I0923 04:41:17.213594   22326 start.go:83] releasing machines lock for "custom-flannel-897000", held for 2.300123708s
	W0923 04:41:17.213660   22326 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:17.220145   22326 out.go:177] * Deleting "custom-flannel-897000" in qemu2 ...
	W0923 04:41:17.254059   22326 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:17.254080   22326 start.go:729] Will try again in 5 seconds ...
	I0923 04:41:22.256315   22326 start.go:360] acquireMachinesLock for custom-flannel-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:22.256809   22326 start.go:364] duration metric: took 384µs to acquireMachinesLock for "custom-flannel-897000"
	I0923 04:41:22.256983   22326 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:22.257255   22326 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:22.273133   22326 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:22.323773   22326 start.go:159] libmachine.API.Create for "custom-flannel-897000" (driver="qemu2")
	I0923 04:41:22.323828   22326 client.go:168] LocalClient.Create starting
	I0923 04:41:22.323933   22326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:22.323996   22326 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:22.324011   22326 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:22.324084   22326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:22.324129   22326 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:22.324139   22326 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:22.324644   22326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:22.500586   22326 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:22.530320   22326 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:22.530325   22326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:22.530560   22326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2
	I0923 04:41:22.539746   22326 main.go:141] libmachine: STDOUT: 
	I0923 04:41:22.539767   22326 main.go:141] libmachine: STDERR: 
	I0923 04:41:22.539838   22326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2 +20000M
	I0923 04:41:22.547676   22326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:22.547690   22326 main.go:141] libmachine: STDERR: 
	I0923 04:41:22.547701   22326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2
	I0923 04:41:22.547706   22326 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:22.547718   22326 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:22.547746   22326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:01:b9:1e:bc:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/custom-flannel-897000/disk.qcow2
	I0923 04:41:22.549351   22326 main.go:141] libmachine: STDOUT: 
	I0923 04:41:22.549366   22326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:22.549380   22326 client.go:171] duration metric: took 225.548459ms to LocalClient.Create
	I0923 04:41:24.551544   22326 start.go:128] duration metric: took 2.294268958s to createHost
	I0923 04:41:24.551612   22326 start.go:83] releasing machines lock for "custom-flannel-897000", held for 2.294786s
	W0923 04:41:24.551905   22326 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:24.565555   22326 out.go:201] 
	W0923 04:41:24.570544   22326 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:41:24.570568   22326 out.go:270] * 
	* 
	W0923 04:41:24.572825   22326 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:41:24.582343   22326 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.80s)
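This failure, and the calico, false, and old-k8s-version starts that follow, all stop at the same STDERR line: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal way to confirm that on the CI host, assuming only the paths already shown in the log plus stock macOS tools (the trailing "true" is just a no-op command for the client to wrap):

	pgrep -fl socket_vmnet                   # is the daemon process running at all?
	ls -l /var/run/socket_vmnet              # does the Unix socket exist?
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true   # can a client actually connect?

If the first two commands come back empty, every qemu2-driver start in this run will fail the same way, regardless of the CNI under test.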

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.819713875s)

                                                
                                                
-- stdout --
	* [calico-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-897000" primary control-plane node in "calico-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:41:27.032490   22446 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:41:27.032604   22446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:27.032608   22446 out.go:358] Setting ErrFile to fd 2...
	I0923 04:41:27.032610   22446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:27.032743   22446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:41:27.033835   22446 out.go:352] Setting JSON to false
	I0923 04:41:27.049976   22446 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9658,"bootTime":1727082029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:41:27.050044   22446 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:41:27.055691   22446 out.go:177] * [calico-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:41:27.063641   22446 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:41:27.063717   22446 notify.go:220] Checking for updates...
	I0923 04:41:27.069603   22446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:41:27.072628   22446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:41:27.074145   22446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:41:27.077567   22446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:41:27.080592   22446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:41:27.083986   22446 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:27.084057   22446 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:27.084102   22446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:41:27.087508   22446 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:41:27.094595   22446 start.go:297] selected driver: qemu2
	I0923 04:41:27.094601   22446 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:41:27.094609   22446 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:41:27.096913   22446 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:41:27.101581   22446 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:41:27.104678   22446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:41:27.104708   22446 cni.go:84] Creating CNI manager for "calico"
	I0923 04:41:27.104714   22446 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0923 04:41:27.104742   22446 start.go:340] cluster config:
	{Name:calico-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:41:27.108453   22446 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:41:27.116551   22446 out.go:177] * Starting "calico-897000" primary control-plane node in "calico-897000" cluster
	I0923 04:41:27.120590   22446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:41:27.120613   22446 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:41:27.120622   22446 cache.go:56] Caching tarball of preloaded images
	I0923 04:41:27.120683   22446 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:41:27.120689   22446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:41:27.120739   22446 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/calico-897000/config.json ...
	I0923 04:41:27.120751   22446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/calico-897000/config.json: {Name:mk8a522692806675788f2fc6491449f66b336348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:41:27.120980   22446 start.go:360] acquireMachinesLock for calico-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:27.121016   22446 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "calico-897000"
	I0923 04:41:27.121030   22446 start.go:93] Provisioning new machine with config: &{Name:calico-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:27.121056   22446 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:27.129591   22446 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:27.147743   22446 start.go:159] libmachine.API.Create for "calico-897000" (driver="qemu2")
	I0923 04:41:27.147779   22446 client.go:168] LocalClient.Create starting
	I0923 04:41:27.147860   22446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:27.147899   22446 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:27.147909   22446 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:27.147948   22446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:27.147976   22446 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:27.147987   22446 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:27.148346   22446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:27.313108   22446 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:27.413963   22446 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:27.413968   22446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:27.414196   22446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2
	I0923 04:41:27.423670   22446 main.go:141] libmachine: STDOUT: 
	I0923 04:41:27.423690   22446 main.go:141] libmachine: STDERR: 
	I0923 04:41:27.423761   22446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2 +20000M
	I0923 04:41:27.431670   22446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:27.431686   22446 main.go:141] libmachine: STDERR: 
	I0923 04:41:27.431731   22446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2
	I0923 04:41:27.431736   22446 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:27.431746   22446 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:27.431777   22446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:b3:c4:f5:a2:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2
	I0923 04:41:27.433383   22446 main.go:141] libmachine: STDOUT: 
	I0923 04:41:27.433396   22446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:27.433417   22446 client.go:171] duration metric: took 285.632542ms to LocalClient.Create
	I0923 04:41:29.435623   22446 start.go:128] duration metric: took 2.31454475s to createHost
	I0923 04:41:29.435682   22446 start.go:83] releasing machines lock for "calico-897000", held for 2.31466675s
	W0923 04:41:29.435836   22446 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:29.450960   22446 out.go:177] * Deleting "calico-897000" in qemu2 ...
	W0923 04:41:29.483244   22446 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:29.483260   22446 start.go:729] Will try again in 5 seconds ...
	I0923 04:41:34.485560   22446 start.go:360] acquireMachinesLock for calico-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:34.486036   22446 start.go:364] duration metric: took 357.292µs to acquireMachinesLock for "calico-897000"
	I0923 04:41:34.486156   22446 start.go:93] Provisioning new machine with config: &{Name:calico-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:calico-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:34.486393   22446 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:34.493156   22446 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:34.541083   22446 start.go:159] libmachine.API.Create for "calico-897000" (driver="qemu2")
	I0923 04:41:34.541121   22446 client.go:168] LocalClient.Create starting
	I0923 04:41:34.541237   22446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:34.541310   22446 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:34.541330   22446 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:34.541390   22446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:34.541434   22446 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:34.541450   22446 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:34.541944   22446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:34.716264   22446 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:34.753619   22446 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:34.753624   22446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:34.753844   22446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2
	I0923 04:41:34.763270   22446 main.go:141] libmachine: STDOUT: 
	I0923 04:41:34.763294   22446 main.go:141] libmachine: STDERR: 
	I0923 04:41:34.763357   22446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2 +20000M
	I0923 04:41:34.771258   22446 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:34.771277   22446 main.go:141] libmachine: STDERR: 
	I0923 04:41:34.771298   22446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2
	I0923 04:41:34.771304   22446 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:34.771316   22446 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:34.771345   22446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:99:70:5e:55:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000/disk.qcow2
	I0923 04:41:34.772963   22446 main.go:141] libmachine: STDOUT: 
	I0923 04:41:34.772977   22446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:34.772989   22446 client.go:171] duration metric: took 231.8645ms to LocalClient.Create
	I0923 04:41:36.775162   22446 start.go:128] duration metric: took 2.28873975s to createHost
	I0923 04:41:36.775203   22446 start.go:83] releasing machines lock for "calico-897000", held for 2.289152917s
	W0923 04:41:36.775541   22446 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:36.790251   22446 out.go:201] 
	W0923 04:41:36.795309   22446 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:41:36.795332   22446 out.go:270] * 
	* 
	W0923 04:41:36.798223   22446 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:41:36.810185   22446 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
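As in the previous block, the disk preparation succeeds on both attempts and only the network hand-off fails. The two qemu-img steps libmachine logs above can be replayed by hand against the same profile directory, sketched here with the path copied from the log (any scratch image behaves the same):

	M=/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/calico-897000
	qemu-img convert -f raw -O qcow2 "$M/disk.qcow2.raw" "$M/disk.qcow2"   # re-encode the raw boot disk as qcow2
	qemu-img resize "$M/disk.qcow2" +20000M                               # grow it by 20000M, matching the log
	qemu-img info "$M/disk.qcow2"                                         # sanity-check format and virtual size

Both steps print clean STDOUT/STDERR in every attempt, so the qemu tooling itself is healthy; the exit status 80 comes entirely from the socket_vmnet connection step.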

                                                
                                    
TestNetworkPlugins/group/false/Start (9.85s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-897000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.852883709s)

                                                
                                                
-- stdout --
	* [false-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-897000" primary control-plane node in "false-897000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-897000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:41:39.280538   22566 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:41:39.280693   22566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:39.280697   22566 out.go:358] Setting ErrFile to fd 2...
	I0923 04:41:39.280699   22566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:39.280838   22566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:41:39.281908   22566 out.go:352] Setting JSON to false
	I0923 04:41:39.298161   22566 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9670,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:41:39.298221   22566 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:41:39.304760   22566 out.go:177] * [false-897000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:41:39.313638   22566 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:41:39.313711   22566 notify.go:220] Checking for updates...
	I0923 04:41:39.321557   22566 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:41:39.324603   22566 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:41:39.327579   22566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:41:39.330526   22566 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:41:39.333565   22566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:41:39.335351   22566 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:39.335428   22566 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:39.335473   22566 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:41:39.338528   22566 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:41:39.345385   22566 start.go:297] selected driver: qemu2
	I0923 04:41:39.345392   22566 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:41:39.345400   22566 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:41:39.347852   22566 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:41:39.351500   22566 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:41:39.355564   22566 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:41:39.355582   22566 cni.go:84] Creating CNI manager for "false"
	I0923 04:41:39.355609   22566 start.go:340] cluster config:
	{Name:false-897000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:41:39.359444   22566 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:41:39.367522   22566 out.go:177] * Starting "false-897000" primary control-plane node in "false-897000" cluster
	I0923 04:41:39.371507   22566 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:41:39.371533   22566 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:41:39.371541   22566 cache.go:56] Caching tarball of preloaded images
	I0923 04:41:39.371603   22566 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:41:39.371609   22566 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:41:39.371665   22566 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/false-897000/config.json ...
	I0923 04:41:39.371676   22566 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/false-897000/config.json: {Name:mk0b4c7abc2863b7bde60729f7ad0c6ed3d75abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:41:39.371906   22566 start.go:360] acquireMachinesLock for false-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:39.371942   22566 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "false-897000"
	I0923 04:41:39.371956   22566 start.go:93] Provisioning new machine with config: &{Name:false-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:39.371995   22566 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:39.380584   22566 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:39.398713   22566 start.go:159] libmachine.API.Create for "false-897000" (driver="qemu2")
	I0923 04:41:39.398742   22566 client.go:168] LocalClient.Create starting
	I0923 04:41:39.398803   22566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:39.398832   22566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:39.398842   22566 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:39.398886   22566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:39.398911   22566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:39.398922   22566 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:39.399308   22566 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:39.564127   22566 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:39.595541   22566 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:39.595547   22566 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:39.595767   22566 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2
	I0923 04:41:39.604892   22566 main.go:141] libmachine: STDOUT: 
	I0923 04:41:39.604916   22566 main.go:141] libmachine: STDERR: 
	I0923 04:41:39.604978   22566 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2 +20000M
	I0923 04:41:39.612790   22566 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:39.612811   22566 main.go:141] libmachine: STDERR: 
	I0923 04:41:39.612828   22566 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2
	I0923 04:41:39.612833   22566 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:39.612846   22566 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:39.612879   22566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:94:98:80:3a:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2
	I0923 04:41:39.614518   22566 main.go:141] libmachine: STDOUT: 
	I0923 04:41:39.614536   22566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:39.614556   22566 client.go:171] duration metric: took 215.810417ms to LocalClient.Create
	I0923 04:41:41.616723   22566 start.go:128] duration metric: took 2.244715458s to createHost
	I0923 04:41:41.616782   22566 start.go:83] releasing machines lock for "false-897000", held for 2.244841s
	W0923 04:41:41.616856   22566 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:41.627051   22566 out.go:177] * Deleting "false-897000" in qemu2 ...
	W0923 04:41:41.663584   22566 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:41.663605   22566 start.go:729] Will try again in 5 seconds ...
	I0923 04:41:46.665752   22566 start.go:360] acquireMachinesLock for false-897000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:46.666164   22566 start.go:364] duration metric: took 332.208µs to acquireMachinesLock for "false-897000"
	I0923 04:41:46.666284   22566 start.go:93] Provisioning new machine with config: &{Name:false-897000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:false-897000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:46.666614   22566 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:46.686430   22566 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0923 04:41:46.736691   22566 start.go:159] libmachine.API.Create for "false-897000" (driver="qemu2")
	I0923 04:41:46.736737   22566 client.go:168] LocalClient.Create starting
	I0923 04:41:46.736852   22566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:46.736921   22566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:46.736938   22566 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:46.736999   22566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:46.737044   22566 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:46.737062   22566 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:46.737624   22566 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:46.912520   22566 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:47.032402   22566 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:47.032407   22566 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:47.032640   22566 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2
	I0923 04:41:47.042208   22566 main.go:141] libmachine: STDOUT: 
	I0923 04:41:47.042227   22566 main.go:141] libmachine: STDERR: 
	I0923 04:41:47.042290   22566 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2 +20000M
	I0923 04:41:47.050530   22566 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:47.050562   22566 main.go:141] libmachine: STDERR: 
	I0923 04:41:47.050576   22566 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2
	I0923 04:41:47.050581   22566 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:47.050589   22566 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:47.050619   22566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:7d:ab:d2:cf:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/false-897000/disk.qcow2
	I0923 04:41:47.052335   22566 main.go:141] libmachine: STDOUT: 
	I0923 04:41:47.052351   22566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:47.052364   22566 client.go:171] duration metric: took 315.622333ms to LocalClient.Create
	I0923 04:41:49.054526   22566 start.go:128] duration metric: took 2.387897333s to createHost
	I0923 04:41:49.054608   22566 start.go:83] releasing machines lock for "false-897000", held for 2.388430916s
	W0923 04:41:49.055092   22566 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-897000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:49.070846   22566 out.go:201] 
	W0923 04:41:49.075898   22566 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:41:49.075923   22566 out.go:270] * 
	* 
	W0923 04:41:49.078749   22566 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:41:49.090799   22566 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
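Since socket_vmnet_client is present at /opt/socket_vmnet/bin (see the QEMU command lines above), the likely remediation is host-side: restart the socket_vmnet daemon rather than change anything in minikube. A hedged sketch, assuming the standard upstream socket_vmnet layout; the gateway address is illustrative, and the daemon needs root to create the vmnet interface:

	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	ls -l /var/run/socket_vmnet   # the socket should now exist and accept connections

On hosts that installed the daemon via Homebrew, keeping it under launchd (for example via brew services run as root) is the more durable fix for a CI agent.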

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-579000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-579000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.00728375s)

                                                
                                                
-- stdout --
	* [old-k8s-version-579000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-579000" primary control-plane node in "old-k8s-version-579000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-579000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:41:51.339490   22678 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:41:51.339611   22678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:51.339614   22678 out.go:358] Setting ErrFile to fd 2...
	I0923 04:41:51.339617   22678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:41:51.339744   22678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:41:51.340873   22678 out.go:352] Setting JSON to false
	I0923 04:41:51.357017   22678 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9682,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:41:51.357091   22678 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:41:51.362760   22678 out.go:177] * [old-k8s-version-579000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:41:51.369746   22678 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:41:51.369810   22678 notify.go:220] Checking for updates...
	I0923 04:41:51.376628   22678 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:41:51.379717   22678 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:41:51.383777   22678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:41:51.386693   22678 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:41:51.389749   22678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:41:51.393052   22678 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:51.393120   22678 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:41:51.393163   22678 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:41:51.397717   22678 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:41:51.404724   22678 start.go:297] selected driver: qemu2
	I0923 04:41:51.404730   22678 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:41:51.404737   22678 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:41:51.406994   22678 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:41:51.409771   22678 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:41:51.412830   22678 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:41:51.412859   22678 cni.go:84] Creating CNI manager for ""
	I0923 04:41:51.412881   22678 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 04:41:51.412918   22678 start.go:340] cluster config:
	{Name:old-k8s-version-579000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:41:51.416622   22678 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:41:51.423703   22678 out.go:177] * Starting "old-k8s-version-579000" primary control-plane node in "old-k8s-version-579000" cluster
	I0923 04:41:51.427737   22678 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:41:51.427754   22678 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:41:51.427761   22678 cache.go:56] Caching tarball of preloaded images
	I0923 04:41:51.427841   22678 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:41:51.427854   22678 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 04:41:51.427921   22678 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/old-k8s-version-579000/config.json ...
	I0923 04:41:51.427933   22678 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/old-k8s-version-579000/config.json: {Name:mk26217af2e55be1bd092ca6385e345217c27adb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:41:51.428171   22678 start.go:360] acquireMachinesLock for old-k8s-version-579000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:51.428212   22678 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "old-k8s-version-579000"
	I0923 04:41:51.428225   22678 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:51.428251   22678 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:51.431736   22678 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:41:51.449647   22678 start.go:159] libmachine.API.Create for "old-k8s-version-579000" (driver="qemu2")
	I0923 04:41:51.449675   22678 client.go:168] LocalClient.Create starting
	I0923 04:41:51.449737   22678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:51.449771   22678 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:51.449782   22678 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:51.449817   22678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:51.449840   22678 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:51.449848   22678 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:51.450255   22678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:51.617545   22678 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:51.787307   22678 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:51.787313   22678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:51.787547   22678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:41:51.797036   22678 main.go:141] libmachine: STDOUT: 
	I0923 04:41:51.797052   22678 main.go:141] libmachine: STDERR: 
	I0923 04:41:51.797105   22678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2 +20000M
	I0923 04:41:51.804969   22678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:51.804991   22678 main.go:141] libmachine: STDERR: 
	I0923 04:41:51.805004   22678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:41:51.805013   22678 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:51.805022   22678 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:51.805055   22678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ae:19:b0:e6:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:41:51.806680   22678 main.go:141] libmachine: STDOUT: 
	I0923 04:41:51.806694   22678 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:51.806712   22678 client.go:171] duration metric: took 357.03325ms to LocalClient.Create
	I0923 04:41:53.808923   22678 start.go:128] duration metric: took 2.380614541s to createHost
	I0923 04:41:53.809021   22678 start.go:83] releasing machines lock for "old-k8s-version-579000", held for 2.380810042s
	W0923 04:41:53.809088   22678 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:53.822029   22678 out.go:177] * Deleting "old-k8s-version-579000" in qemu2 ...
	W0923 04:41:53.856313   22678 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:41:53.856330   22678 start.go:729] Will try again in 5 seconds ...
	I0923 04:41:58.858707   22678 start.go:360] acquireMachinesLock for old-k8s-version-579000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:41:58.859212   22678 start.go:364] duration metric: took 383.125µs to acquireMachinesLock for "old-k8s-version-579000"
	I0923 04:41:58.859339   22678 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:41:58.859620   22678 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:41:58.865349   22678 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:41:58.916336   22678 start.go:159] libmachine.API.Create for "old-k8s-version-579000" (driver="qemu2")
	I0923 04:41:58.916391   22678 client.go:168] LocalClient.Create starting
	I0923 04:41:58.916514   22678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:41:58.916577   22678 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:58.916593   22678 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:58.916656   22678 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:41:58.916701   22678 main.go:141] libmachine: Decoding PEM data...
	I0923 04:41:58.916717   22678 main.go:141] libmachine: Parsing certificate...
	I0923 04:41:58.917242   22678 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:41:59.091249   22678 main.go:141] libmachine: Creating SSH key...
	I0923 04:41:59.251611   22678 main.go:141] libmachine: Creating Disk image...
	I0923 04:41:59.251620   22678 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:41:59.251840   22678 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:41:59.262334   22678 main.go:141] libmachine: STDOUT: 
	I0923 04:41:59.262389   22678 main.go:141] libmachine: STDERR: 
	I0923 04:41:59.262458   22678 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2 +20000M
	I0923 04:41:59.270483   22678 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:41:59.270496   22678 main.go:141] libmachine: STDERR: 
	I0923 04:41:59.270509   22678 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:41:59.270515   22678 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:41:59.270528   22678 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:41:59.270561   22678 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:6d:be:49:42:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:41:59.272255   22678 main.go:141] libmachine: STDOUT: 
	I0923 04:41:59.272269   22678 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:41:59.272285   22678 client.go:171] duration metric: took 355.889542ms to LocalClient.Create
	I0923 04:42:01.274495   22678 start.go:128] duration metric: took 2.414842208s to createHost
	I0923 04:42:01.274582   22678 start.go:83] releasing machines lock for "old-k8s-version-579000", held for 2.415356083s
	W0923 04:42:01.274971   22678 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-579000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-579000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:01.286431   22678 out.go:201] 
	W0923 04:42:01.290881   22678 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:01.290906   22678 out.go:270] * 
	* 
	W0923 04:42:01.293782   22678 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:01.302745   22678 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-579000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (69.965625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)
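
Every failure in this serial group traces to the root cause shown above: minikube launches qemu-system-aarch64 through socket_vmnet_client, and the daemon socket at /var/run/socket_vmnet refuses connections, so host creation aborts before Kubernetes ever starts. A minimal diagnostic sketch, assuming socket_vmnet was installed from source under /opt/socket_vmnet (as the logged client path suggests; the gateway address is the project's documented default, not taken from this run):

	# Is the daemon socket present on the build host?
	ls -l /var/run/socket_vmnet

	# Relaunch the daemon by hand; vmnet.framework requires root.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet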

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-579000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-579000 create -f testdata/busybox.yaml: exit status 1 (30.094417ms)

** stderr ** 
	error: context "old-k8s-version-579000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-579000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (30.803583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (31.422709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-579000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-579000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-579000 describe deploy/metrics-server -n kube-system: exit status 1 (26.777791ms)

** stderr ** 
	error: context "old-k8s-version-579000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-579000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (30.947917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-579000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-579000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.195601542s)

-- stdout --
	* [old-k8s-version-579000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-579000" primary control-plane node in "old-k8s-version-579000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-579000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-579000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:42:03.765707   22725 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:03.765829   22725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:03.765832   22725 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:03.765836   22725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:03.765966   22725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:03.767029   22725 out.go:352] Setting JSON to false
	I0923 04:42:03.783037   22725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9694,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:03.783120   22725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:03.786774   22725 out.go:177] * [old-k8s-version-579000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:03.794710   22725 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:03.794785   22725 notify.go:220] Checking for updates...
	I0923 04:42:03.802862   22725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:03.806833   22725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:03.809893   22725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:03.812889   22725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:03.815841   22725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:03.819135   22725 config.go:182] Loaded profile config "old-k8s-version-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 04:42:03.822810   22725 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 04:42:03.825880   22725 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:03.829787   22725 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:42:03.835778   22725 start.go:297] selected driver: qemu2
	I0923 04:42:03.835783   22725 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-579000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:03.835842   22725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:03.838269   22725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:42:03.838295   22725 cni.go:84] Creating CNI manager for ""
	I0923 04:42:03.838316   22725 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 04:42:03.838334   22725 start.go:340] cluster config:
	{Name:old-k8s-version-579000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-579000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:03.841895   22725 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:03.850915   22725 out.go:177] * Starting "old-k8s-version-579000" primary control-plane node in "old-k8s-version-579000" cluster
	I0923 04:42:03.854836   22725 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:42:03.854853   22725 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:42:03.854861   22725 cache.go:56] Caching tarball of preloaded images
	I0923 04:42:03.854932   22725 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:42:03.854938   22725 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 04:42:03.854998   22725 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/old-k8s-version-579000/config.json ...
	I0923 04:42:03.855486   22725 start.go:360] acquireMachinesLock for old-k8s-version-579000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:03.855516   22725 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "old-k8s-version-579000"
	I0923 04:42:03.855526   22725 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:42:03.855531   22725 fix.go:54] fixHost starting: 
	I0923 04:42:03.855654   22725 fix.go:112] recreateIfNeeded on old-k8s-version-579000: state=Stopped err=<nil>
	W0923 04:42:03.855664   22725 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:42:03.858767   22725 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-579000" ...
	I0923 04:42:03.866796   22725 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:03.866827   22725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:6d:be:49:42:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:42:03.868874   22725 main.go:141] libmachine: STDOUT: 
	I0923 04:42:03.868895   22725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:03.868925   22725 fix.go:56] duration metric: took 13.392833ms for fixHost
	I0923 04:42:03.868931   22725 start.go:83] releasing machines lock for "old-k8s-version-579000", held for 13.410042ms
	W0923 04:42:03.868939   22725 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:03.868977   22725 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:03.868982   22725 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:08.871202   22725 start.go:360] acquireMachinesLock for old-k8s-version-579000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:08.871588   22725 start.go:364] duration metric: took 296.166µs to acquireMachinesLock for "old-k8s-version-579000"
	I0923 04:42:08.871707   22725 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:42:08.871725   22725 fix.go:54] fixHost starting: 
	I0923 04:42:08.872522   22725 fix.go:112] recreateIfNeeded on old-k8s-version-579000: state=Stopped err=<nil>
	W0923 04:42:08.872546   22725 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:42:08.881943   22725 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-579000" ...
	I0923 04:42:08.886779   22725 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:08.887018   22725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:6d:be:49:42:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/old-k8s-version-579000/disk.qcow2
	I0923 04:42:08.896054   22725 main.go:141] libmachine: STDOUT: 
	I0923 04:42:08.896109   22725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:08.896170   22725 fix.go:56] duration metric: took 24.443791ms for fixHost
	I0923 04:42:08.896189   22725 start.go:83] releasing machines lock for "old-k8s-version-579000", held for 24.576667ms
	W0923 04:42:08.896362   22725 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-579000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-579000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:08.903915   22725 out.go:201] 
	W0923 04:42:08.907980   22725 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:08.908003   22725 out.go:270] * 
	* 
	W0923 04:42:08.910618   22725 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:08.919010   22725 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-579000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (68.882916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
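
SecondStart fails identically, but through the fixHost path ("Restarting existing qemu2 VM") rather than machine creation, since the profile left over from FirstStart still exists. The error output itself proposes the recovery; a sketch using only commands already present in this log:

	# Drop the stale profile named in the GUEST_PROVISION error, then retry the start.
	out/minikube-darwin-arm64 delete -p old-k8s-version-579000
	out/minikube-darwin-arm64 start -p old-k8s-version-579000 --driver=qemu2 --kubernetes-version=v1.20.0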

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-579000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (33.1105ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-579000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-579000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-579000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.332541ms)

** stderr ** 
	error: context "old-k8s-version-579000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-579000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (30.842542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-579000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (30.891166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
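
The diff above is go-cmp's -want +got notation: every expected v1.20.0 control-plane image carries a leading "-" because the VM never booted, so the image list came back empty and the entire want-list shows as missing. On a healthy profile the same check could be approximated by hand; a sketch using the profile name from this report and one of the formats the image list subcommand accepts:

	out/minikube-darwin-arm64 -p old-k8s-version-579000 image list --format=table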

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-579000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-579000 --alsologtostderr -v=1: exit status 83 (43.879208ms)

-- stdout --
	* The control-plane node old-k8s-version-579000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-579000"

-- /stdout --
** stderr ** 
	I0923 04:42:09.194936   22746 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:09.195360   22746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:09.195364   22746 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:09.195367   22746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:09.195563   22746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:09.195775   22746 out.go:352] Setting JSON to false
	I0923 04:42:09.195782   22746 mustload.go:65] Loading cluster: old-k8s-version-579000
	I0923 04:42:09.196003   22746 config.go:182] Loaded profile config "old-k8s-version-579000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 04:42:09.200941   22746 out.go:177] * The control-plane node old-k8s-version-579000 host is not running: state=Stopped
	I0923 04:42:09.204718   22746 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-579000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-579000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (30.8095ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (30.522875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-579000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.779611334s)

-- stdout --
	* [no-preload-836000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-836000" primary control-plane node in "no-preload-836000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-836000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:42:09.519870   22763 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:09.520026   22763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:09.520029   22763 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:09.520032   22763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:09.520150   22763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:09.521217   22763 out.go:352] Setting JSON to false
	I0923 04:42:09.537408   22763 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9700,"bootTime":1727082029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:09.537474   22763 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:09.540932   22763 out.go:177] * [no-preload-836000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:09.546893   22763 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:09.546934   22763 notify.go:220] Checking for updates...
	I0923 04:42:09.553846   22763 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:09.556901   22763 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:09.559794   22763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:09.562804   22763 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:09.565862   22763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:09.569118   22763 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:09.569175   22763 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:09.569227   22763 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:09.573773   22763 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:42:09.579829   22763 start.go:297] selected driver: qemu2
	I0923 04:42:09.579835   22763 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:42:09.579842   22763 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:09.582156   22763 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:42:09.585871   22763 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:42:09.588897   22763 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:42:09.588913   22763 cni.go:84] Creating CNI manager for ""
	I0923 04:42:09.588934   22763 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:42:09.588940   22763 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:42:09.588975   22763 start.go:340] cluster config:
	{Name:no-preload-836000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:09.592822   22763 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.600844   22763 out.go:177] * Starting "no-preload-836000" primary control-plane node in "no-preload-836000" cluster
	I0923 04:42:09.604900   22763 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:42:09.604973   22763 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/no-preload-836000/config.json ...
	I0923 04:42:09.604994   22763 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/no-preload-836000/config.json: {Name:mk0520898b85de4ae24c4c358f31adf1e2e951b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:42:09.605029   22763 cache.go:107] acquiring lock: {Name:mk12ffd255a263dbbb1b963a6e29b44678e5a8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605028   22763 cache.go:107] acquiring lock: {Name:mk5c1b72e897512babb874e0511d668fd4f4a8e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605036   22763 cache.go:107] acquiring lock: {Name:mk149f78b192b6198ebee9e7840058ae5a096258 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605109   22763 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 04:42:09.605116   22763 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.417µs
	I0923 04:42:09.605123   22763 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 04:42:09.605129   22763 cache.go:107] acquiring lock: {Name:mk8a2a1707ab33ec2f0db2209c5df806d3e4956b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605165   22763 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0923 04:42:09.605180   22763 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 04:42:09.605205   22763 cache.go:107] acquiring lock: {Name:mkf126eb811ef6c315bcd595d36e422ebece1bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605226   22763 cache.go:107] acquiring lock: {Name:mk3f163ffe8db89396e94c63a72c105d3446f50e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605263   22763 cache.go:107] acquiring lock: {Name:mk76c4e7914c46a1315b164693ffbce9cdd86022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605276   22763 cache.go:107] acquiring lock: {Name:mk7eb1f224df61a83d02e0bdef0f40de9af9c1ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:09.605290   22763 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0923 04:42:09.605255   22763 start.go:360] acquireMachinesLock for no-preload-836000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:09.605377   22763 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0923 04:42:09.605437   22763 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0923 04:42:09.605438   22763 start.go:364] duration metric: took 109.666µs to acquireMachinesLock for "no-preload-836000"
	I0923 04:42:09.605475   22763 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0923 04:42:09.605499   22763 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0923 04:42:09.605451   22763 start.go:93] Provisioning new machine with config: &{Name:no-preload-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:09.605558   22763 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:09.613815   22763 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:09.616827   22763 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0923 04:42:09.617180   22763 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0923 04:42:09.617483   22763 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0923 04:42:09.617511   22763 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 04:42:09.619391   22763 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0923 04:42:09.619402   22763 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0923 04:42:09.619430   22763 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0923 04:42:09.632701   22763 start.go:159] libmachine.API.Create for "no-preload-836000" (driver="qemu2")
	I0923 04:42:09.632728   22763 client.go:168] LocalClient.Create starting
	I0923 04:42:09.632916   22763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:09.632954   22763 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:09.632965   22763 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:09.633023   22763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:09.633054   22763 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:09.633067   22763 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:09.633467   22763 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:09.800627   22763 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:09.853055   22763 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:09.853084   22763 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:09.853336   22763 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:09.863271   22763 main.go:141] libmachine: STDOUT: 
	I0923 04:42:09.863291   22763 main.go:141] libmachine: STDERR: 
	I0923 04:42:09.863366   22763 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2 +20000M
	I0923 04:42:09.872289   22763 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:09.872307   22763 main.go:141] libmachine: STDERR: 
	I0923 04:42:09.872325   22763 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:09.872331   22763 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:09.872345   22763 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:09.872372   22763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:5f:7e:dc:fd:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:09.874169   22763 main.go:141] libmachine: STDOUT: 
	I0923 04:42:09.874185   22763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:09.874202   22763 client.go:171] duration metric: took 241.469708ms to LocalClient.Create
	I0923 04:42:09.977595   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0923 04:42:09.998290   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0923 04:42:10.015706   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0923 04:42:10.031493   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0923 04:42:10.044475   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0923 04:42:10.072337   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0923 04:42:10.094538   22763 cache.go:162] opening:  /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0923 04:42:10.143241   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 04:42:10.143277   22763 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 538.110541ms
	I0923 04:42:10.143295   22763 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 04:42:11.874424   22763 start.go:128] duration metric: took 2.268845375s to createHost
	I0923 04:42:11.874480   22763 start.go:83] releasing machines lock for "no-preload-836000", held for 2.269042625s
	W0923 04:42:11.874544   22763 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:11.892288   22763 out.go:177] * Deleting "no-preload-836000" in qemu2 ...
	W0923 04:42:11.922111   22763 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:11.922137   22763 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:12.456962   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 04:42:12.457017   22763 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 2.851815958s
	I0923 04:42:12.457053   22763 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 04:42:12.544273   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 04:42:12.544313   22763 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.939057542s
	I0923 04:42:12.544354   22763 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 04:42:13.740144   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 04:42:13.740191   22763 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.134951542s
	I0923 04:42:13.740246   22763 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 04:42:14.048672   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 04:42:14.048725   22763 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.443716458s
	I0923 04:42:14.048755   22763 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 04:42:15.028010   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 04:42:15.028071   22763 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.423067959s
	I0923 04:42:15.028102   22763 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 04:42:16.654707   22763 cache.go:157] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 04:42:16.654759   22763 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.049656708s
	I0923 04:42:16.654786   22763 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 04:42:16.654816   22763 cache.go:87] Successfully saved all images to host disk.
	I0923 04:42:16.924287   22763 start.go:360] acquireMachinesLock for no-preload-836000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:16.924614   22763 start.go:364] duration metric: took 274µs to acquireMachinesLock for "no-preload-836000"
	I0923 04:42:16.924711   22763 start.go:93] Provisioning new machine with config: &{Name:no-preload-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:16.924992   22763 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:16.934453   22763 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:16.985321   22763 start.go:159] libmachine.API.Create for "no-preload-836000" (driver="qemu2")
	I0923 04:42:16.985372   22763 client.go:168] LocalClient.Create starting
	I0923 04:42:16.985490   22763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:16.985560   22763 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:16.985584   22763 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:16.985662   22763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:16.985707   22763 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:16.985725   22763 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:16.986245   22763 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:17.167899   22763 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:17.204052   22763 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:17.204057   22763 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:17.204249   22763 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:17.213585   22763 main.go:141] libmachine: STDOUT: 
	I0923 04:42:17.213604   22763 main.go:141] libmachine: STDERR: 
	I0923 04:42:17.213675   22763 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2 +20000M
	I0923 04:42:17.221640   22763 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:17.221653   22763 main.go:141] libmachine: STDERR: 
	I0923 04:42:17.221670   22763 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:17.221677   22763 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:17.221686   22763 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:17.221720   22763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:56:50:67:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:17.223473   22763 main.go:141] libmachine: STDOUT: 
	I0923 04:42:17.223490   22763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:17.223503   22763 client.go:171] duration metric: took 238.125042ms to LocalClient.Create
	I0923 04:42:19.225753   22763 start.go:128] duration metric: took 2.300726875s to createHost
	I0923 04:42:19.225830   22763 start.go:83] releasing machines lock for "no-preload-836000", held for 2.301206208s
	W0923 04:42:19.226239   22763 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-836000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-836000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:19.238779   22763 out.go:201] 
	W0923 04:42:19.242899   22763 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:19.243081   22763 out.go:270] * 
	* 
	W0923 04:42:19.245692   22763 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:19.256809   22763 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (66.831583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.85s)
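
Every failed start in this group dies at the same step: the qemu2 driver cannot reach the vmnet helper socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), which points at the host-side helper rather than at minikube itself. A minimal host-side spot check, assuming the Homebrew-style socket_vmnet install implied by the command lines above (the service name in the last line is an assumption, not taken from this log):

	# Is the helper socket present, and is the daemon loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i vmnet
	# On a Homebrew install, restarting the service may clear the refusal
	# (service name assumed):
	#   sudo brew services restart socket_vmnet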

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-836000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-836000 create -f testdata/busybox.yaml: exit status 1 (29.426875ms)

** stderr ** 
	error: context "no-preload-836000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-836000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (31.727458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (31.013959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
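
The DeployApp failure is downstream of FirstStart: "minikube start" exited before provisioning, so a kubeconfig context named no-preload-836000 was never written, and every kubectl --context call fails the same way. A quick confirmation with plain kubectl (profile name taken from the log above):

	# The context list will not include the profile after a failed start;
	# naming it directly exits non-zero with "context ... not found".
	kubectl config get-contexts
	kubectl config get-contexts no-preload-836000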

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-836000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-836000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-836000 describe deploy/metrics-server -n kube-system: exit status 1 (27.111292ms)

** stderr ** 
	error: context "no-preload-836000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-836000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (31.047667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.179483833s)

-- stdout --
	* [no-preload-836000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-836000" primary control-plane node in "no-preload-836000" cluster
	* Restarting existing qemu2 VM for "no-preload-836000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-836000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0923 04:42:23.314110   22839 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:23.314261   22839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:23.314264   22839 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:23.314266   22839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:23.314379   22839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:23.315334   22839 out.go:352] Setting JSON to false
	I0923 04:42:23.331480   22839 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9714,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:23.331549   22839 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:23.335002   22839 out.go:177] * [no-preload-836000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:23.341984   22839 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:23.342070   22839 notify.go:220] Checking for updates...
	I0923 04:42:23.349985   22839 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:23.352911   22839 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:23.355929   22839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:23.358950   22839 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:23.360415   22839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:23.364231   22839 config.go:182] Loaded profile config "no-preload-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:23.364527   22839 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:23.368924   22839 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:42:23.374932   22839 start.go:297] selected driver: qemu2
	I0923 04:42:23.374938   22839 start.go:901] validating driver "qemu2" against &{Name:no-preload-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:23.375002   22839 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:23.377289   22839 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:42:23.377318   22839 cni.go:84] Creating CNI manager for ""
	I0923 04:42:23.377338   22839 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:42:23.377361   22839 start.go:340] cluster config:
	{Name:no-preload-836000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-836000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:23.381066   22839 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.388897   22839 out.go:177] * Starting "no-preload-836000" primary control-plane node in "no-preload-836000" cluster
	I0923 04:42:23.392916   22839 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:42:23.392997   22839 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/no-preload-836000/config.json ...
	I0923 04:42:23.393029   22839 cache.go:107] acquiring lock: {Name:mk149f78b192b6198ebee9e7840058ae5a096258 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393025   22839 cache.go:107] acquiring lock: {Name:mk12ffd255a263dbbb1b963a6e29b44678e5a8a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393098   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0923 04:42:23.393105   22839 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 77.458µs
	I0923 04:42:23.393104   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0923 04:42:23.393114   22839 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0923 04:42:23.393107   22839 cache.go:107] acquiring lock: {Name:mk5c1b72e897512babb874e0511d668fd4f4a8e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393117   22839 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 101.083µs
	I0923 04:42:23.393121   22839 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0923 04:42:23.393120   22839 cache.go:107] acquiring lock: {Name:mk76c4e7914c46a1315b164693ffbce9cdd86022 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393127   22839 cache.go:107] acquiring lock: {Name:mk7eb1f224df61a83d02e0bdef0f40de9af9c1ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393143   22839 cache.go:107] acquiring lock: {Name:mk3f163ffe8db89396e94c63a72c105d3446f50e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393159   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0923 04:42:23.393165   22839 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 45.042µs
	I0923 04:42:23.393168   22839 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0923 04:42:23.393197   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0923 04:42:23.393204   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0923 04:42:23.393175   22839 cache.go:107] acquiring lock: {Name:mkf126eb811ef6c315bcd595d36e422ebece1bbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393208   22839 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 146.875µs
	I0923 04:42:23.393212   22839 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0923 04:42:23.393207   22839 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 64.916µs
	I0923 04:42:23.393207   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0923 04:42:23.393176   22839 cache.go:107] acquiring lock: {Name:mk8a2a1707ab33ec2f0db2209c5df806d3e4956b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:23.393233   22839 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 105.625µs
	I0923 04:42:23.393251   22839 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0923 04:42:23.393219   22839 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0923 04:42:23.393258   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0923 04:42:23.393264   22839 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 132.916µs
	I0923 04:42:23.393271   22839 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0923 04:42:23.393276   22839 cache.go:115] /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0923 04:42:23.393281   22839 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 120.083µs
	I0923 04:42:23.393284   22839 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0923 04:42:23.393289   22839 cache.go:87] Successfully saved all images to host disk.
	I0923 04:42:23.393461   22839 start.go:360] acquireMachinesLock for no-preload-836000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:23.393495   22839 start.go:364] duration metric: took 23.417µs to acquireMachinesLock for "no-preload-836000"
	I0923 04:42:23.393504   22839 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:42:23.393509   22839 fix.go:54] fixHost starting: 
	I0923 04:42:23.393648   22839 fix.go:112] recreateIfNeeded on no-preload-836000: state=Stopped err=<nil>
	W0923 04:42:23.393660   22839 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:42:23.401913   22839 out.go:177] * Restarting existing qemu2 VM for "no-preload-836000" ...
	I0923 04:42:23.405931   22839 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:23.405975   22839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:56:50:67:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:23.408130   22839 main.go:141] libmachine: STDOUT: 
	I0923 04:42:23.408152   22839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:23.408181   22839 fix.go:56] duration metric: took 14.669625ms for fixHost
	I0923 04:42:23.408186   22839 start.go:83] releasing machines lock for "no-preload-836000", held for 14.687209ms
	W0923 04:42:23.408194   22839 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:23.408227   22839 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:23.408232   22839 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:28.410478   22839 start.go:360] acquireMachinesLock for no-preload-836000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:28.410840   22839 start.go:364] duration metric: took 299.083µs to acquireMachinesLock for "no-preload-836000"
	I0923 04:42:28.410970   22839 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:42:28.410986   22839 fix.go:54] fixHost starting: 
	I0923 04:42:28.411686   22839 fix.go:112] recreateIfNeeded on no-preload-836000: state=Stopped err=<nil>
	W0923 04:42:28.411711   22839 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:42:28.415174   22839 out.go:177] * Restarting existing qemu2 VM for "no-preload-836000" ...
	I0923 04:42:28.421144   22839 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:28.421462   22839 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:32:56:50:67:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/no-preload-836000/disk.qcow2
	I0923 04:42:28.430501   22839 main.go:141] libmachine: STDOUT: 
	I0923 04:42:28.430582   22839 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:28.430664   22839 fix.go:56] duration metric: took 19.672458ms for fixHost
	I0923 04:42:28.430688   22839 start.go:83] releasing machines lock for "no-preload-836000", held for 19.825ms
	W0923 04:42:28.430874   22839 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-836000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-836000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:28.438027   22839 out.go:201] 
	W0923 04:42:28.441188   22839 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:28.441216   22839 out.go:270] * 
	* 
	W0923 04:42:28.443883   22839 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:28.456056   22839 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-836000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (71.424667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
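
Both restart attempts above die at the same call: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet. A minimal Go probe of that socket, offered as a diagnostic sketch outside the test suite (the socket path is the one shown in the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" here matches the driver failure above and
        // means no socket_vmnet daemon is listening on the socket.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails the same way, the usual remedy is to (re)start the socket_vmnet daemon on the host before rerunning the suite; every subsequent failure in this group is downstream of this one connection error.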

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-836000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (34.016292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
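
The "does not exist" error is a kubeconfig lookup failure, not an app failure: the preceding start never completed, so no context was written for the profile. A sketch of the same lookup using client-go's clientcmd loader (illustrative only; the kubeconfig path is the one from the log):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19690-18362/kubeconfig")
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        // kubectl reports exactly this condition as:
        //   error: context "no-preload-836000" does not exist
        if _, ok := cfg.Contexts["no-preload-836000"]; !ok {
            fmt.Println(`context "no-preload-836000" missing: the cluster was never provisioned`)
        }
    }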

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-836000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-836000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-836000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.153833ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-836000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-836000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (31.1375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
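
The image assertion at start_stop_delete_test.go:297 reduces to a substring check over the deployment description, which is empty here because kubectl could not reach any cluster. A stripped-down sketch of that check (hypothetical variable names, not the test's exact code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        const wantImage = "registry.k8s.io/echoserver:1.4"
        deployInfo := "" // `kubectl describe` printed nothing: the context is gone

        if !strings.Contains(deployInfo, wantImage) {
            fmt.Printf("addon did not load correct image. Expected to contain %q. Addon deployment info: %s\n", wantImage, deployInfo)
        }
    }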

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-836000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (30.964833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
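
The "(-want +got)" block above is the output shape of a go-cmp diff: with the host stopped, "image list" returns nothing, so every expected image surfaces with a leading "-". A reduced sketch of the comparison (assuming the github.com/google/go-cmp/cmp package; the want list is abbreviated):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/pause:3.10",
        }
        var got []string // image list from a stopped host is empty

        // A non-empty diff fails the test; each missing entry is printed
        // with "-", matching the failure block above.
        if d := cmp.Diff(want, got); d != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", d)
        }
    }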

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-836000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-836000 --alsologtostderr -v=1: exit status 83 (43.524625ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-836000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-836000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:42:28.732267   22858 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:28.732433   22858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:28.732436   22858 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:28.732438   22858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:28.732579   22858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:28.732808   22858 out.go:352] Setting JSON to false
	I0923 04:42:28.732815   22858 mustload.go:65] Loading cluster: no-preload-836000
	I0923 04:42:28.733037   22858 config.go:182] Loaded profile config "no-preload-836000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:28.737218   22858 out.go:177] * The control-plane node no-preload-836000 host is not running: state=Stopped
	I0923 04:42:28.741103   22858 out.go:177]   To start a cluster, run: "minikube start -p no-preload-836000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-836000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (31.093583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (30.461292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-836000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
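
pause exits with status 83 after mustload.go finds the profile's host stopped. A sketch of the same pre-check, reading the per-profile config.json that the "Loaded profile config" line refers to (the struct is a deliberately minimal, illustrative view of that file; only the fields echoed in the log line are kept):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // profile is a minimal, illustrative slice of minikube's per-profile
    // config.json, enough to mirror the "Loaded profile config" log line.
    type profile struct {
        Name   string
        Driver string
    }

    func main() {
        path := "/Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/no-preload-836000/config.json"
        f, err := os.Open(path)
        if err != nil {
            fmt.Println("no profile on disk:", err)
            return
        }
        defer f.Close()

        var p profile
        if err := json.NewDecoder(f).Decode(&p); err != nil {
            fmt.Println("decode:", err)
            return
        }
        fmt.Printf("profile %q (driver %q) loaded; pause still requires a Running host\n", p.Name, p.Driver)
    }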

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-946000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-946000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.930638291s)

                                                
                                                
-- stdout --
	* [embed-certs-946000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-946000" primary control-plane node in "embed-certs-946000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-946000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:42:29.054161   22875 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:29.054309   22875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:29.054312   22875 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:29.054315   22875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:29.054441   22875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:29.055504   22875 out.go:352] Setting JSON to false
	I0923 04:42:29.071681   22875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9720,"bootTime":1727082029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:29.071765   22875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:29.076207   22875 out.go:177] * [embed-certs-946000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:29.083115   22875 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:29.083215   22875 notify.go:220] Checking for updates...
	I0923 04:42:29.091116   22875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:29.094108   22875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:29.097114   22875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:29.100134   22875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:29.103120   22875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:29.106428   22875 config.go:182] Loaded profile config "cert-expiration-265000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:29.106498   22875 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:29.106547   22875 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:29.110122   22875 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:42:29.117079   22875 start.go:297] selected driver: qemu2
	I0923 04:42:29.117084   22875 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:42:29.117096   22875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:29.119502   22875 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:42:29.121181   22875 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:42:29.124238   22875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:42:29.124267   22875 cni.go:84] Creating CNI manager for ""
	I0923 04:42:29.124295   22875 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:42:29.124302   22875 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:42:29.124330   22875 start.go:340] cluster config:
	{Name:embed-certs-946000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-946000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:29.128158   22875 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:29.136104   22875 out.go:177] * Starting "embed-certs-946000" primary control-plane node in "embed-certs-946000" cluster
	I0923 04:42:29.140088   22875 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:42:29.140104   22875 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:42:29.140112   22875 cache.go:56] Caching tarball of preloaded images
	I0923 04:42:29.140180   22875 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:42:29.140187   22875 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:42:29.140258   22875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/embed-certs-946000/config.json ...
	I0923 04:42:29.140269   22875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/embed-certs-946000/config.json: {Name:mk0384768f88db5676068f59d8e62a1fc2ad1e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:42:29.140510   22875 start.go:360] acquireMachinesLock for embed-certs-946000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:29.140543   22875 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "embed-certs-946000"
	I0923 04:42:29.140556   22875 start.go:93] Provisioning new machine with config: &{Name:embed-certs-946000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-946000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:29.140580   22875 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:29.149093   22875 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:29.167428   22875 start.go:159] libmachine.API.Create for "embed-certs-946000" (driver="qemu2")
	I0923 04:42:29.167458   22875 client.go:168] LocalClient.Create starting
	I0923 04:42:29.167520   22875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:29.167552   22875 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:29.167561   22875 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:29.167598   22875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:29.167622   22875 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:29.167637   22875 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:29.168000   22875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:29.333194   22875 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:29.478681   22875 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:29.478688   22875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:29.478938   22875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:29.488629   22875 main.go:141] libmachine: STDOUT: 
	I0923 04:42:29.488651   22875 main.go:141] libmachine: STDERR: 
	I0923 04:42:29.488707   22875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2 +20000M
	I0923 04:42:29.496739   22875 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:29.496751   22875 main.go:141] libmachine: STDERR: 
	I0923 04:42:29.496768   22875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:29.496774   22875 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:29.496786   22875 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:29.496814   22875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:97:5b:b0:96:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:29.498553   22875 main.go:141] libmachine: STDOUT: 
	I0923 04:42:29.498566   22875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:29.498593   22875 client.go:171] duration metric: took 331.121375ms to LocalClient.Create
	I0923 04:42:31.500764   22875 start.go:128] duration metric: took 2.360172667s to createHost
	I0923 04:42:31.500878   22875 start.go:83] releasing machines lock for "embed-certs-946000", held for 2.360294042s
	W0923 04:42:31.500944   22875 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:31.512331   22875 out.go:177] * Deleting "embed-certs-946000" in qemu2 ...
	W0923 04:42:31.542068   22875 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:31.542089   22875 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:36.544339   22875 start.go:360] acquireMachinesLock for embed-certs-946000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:36.544784   22875 start.go:364] duration metric: took 335.458µs to acquireMachinesLock for "embed-certs-946000"
	I0923 04:42:36.544908   22875 start.go:93] Provisioning new machine with config: &{Name:embed-certs-946000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-946000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:36.545239   22875 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:36.555901   22875 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:36.605351   22875 start.go:159] libmachine.API.Create for "embed-certs-946000" (driver="qemu2")
	I0923 04:42:36.605414   22875 client.go:168] LocalClient.Create starting
	I0923 04:42:36.605534   22875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:36.605609   22875 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:36.605628   22875 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:36.605705   22875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:36.605751   22875 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:36.605772   22875 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:36.606395   22875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:36.778726   22875 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:36.891988   22875 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:36.891994   22875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:36.892213   22875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:36.901614   22875 main.go:141] libmachine: STDOUT: 
	I0923 04:42:36.901629   22875 main.go:141] libmachine: STDERR: 
	I0923 04:42:36.901698   22875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2 +20000M
	I0923 04:42:36.909629   22875 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:36.909646   22875 main.go:141] libmachine: STDERR: 
	I0923 04:42:36.909656   22875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:36.909662   22875 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:36.909671   22875 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:36.909701   22875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cb:33:79:ce:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:36.911356   22875 main.go:141] libmachine: STDOUT: 
	I0923 04:42:36.911371   22875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:36.911388   22875 client.go:171] duration metric: took 305.968959ms to LocalClient.Create
	I0923 04:42:38.913624   22875 start.go:128] duration metric: took 2.368368s to createHost
	I0923 04:42:38.913679   22875 start.go:83] releasing machines lock for "embed-certs-946000", held for 2.368882458s
	W0923 04:42:38.914128   22875 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-946000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-946000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:38.929034   22875 out.go:201] 
	W0923 04:42:38.932588   22875 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:38.932615   22875 out.go:270] * 
	* 
	W0923 04:42:38.934572   22875 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:38.943520   22875 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-946000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (66.787458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.00s)
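
Before each QEMU launch, the driver materializes the guest disk with the two qemu-img invocations logged at 04:42:29.478 and 04:42:36.892, and both succeed; the failure only happens afterwards, at VM start. A standalone sketch of that disk-preparation sequence via os/exec (paths shortened for readability):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        raw := "disk.qcow2.raw"
        img := "disk.qcow2"

        // Step 1: convert the raw boot disk to qcow2 (the "Creating Disk
        // image..." phase in the log).
        if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img).CombinedOutput(); err != nil {
            log.Fatalf("qemu-img convert: %v\n%s", err, out)
        }
        // Step 2: grow it by the requested 20000 MB.
        if out, err := exec.Command("qemu-img", "resize", img, "+20000M").CombinedOutput(); err != nil {
            log.Fatalf("qemu-img resize: %v\n%s", err, out)
        }
        log.Println("disk image ready; the log's failure comes later, at socket_vmnet connect")
    }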

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-946000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-946000 create -f testdata/busybox.yaml: exit status 1 (28.519ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-946000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-946000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (31.323ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (30.926542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-946000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-946000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-946000 describe deploy/metrics-server -n kube-system: exit status 1 (26.462792ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-946000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-946000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (30.865208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
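
The --images and --registries flags pair an addon component with an override as KEY=VALUE entries (MetricsServer=... in the command above). A sketch of that parsing shape (illustrative helper, not minikube's internal code):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseOverrides splits "Key=Value,Key2=Value2" into a map, the shape
    // taken by --images and --registries in the command above.
    func parseOverrides(s string) map[string]string {
        out := map[string]string{}
        for _, pair := range strings.Split(s, ",") {
            if k, v, ok := strings.Cut(pair, "="); ok {
                out[k] = v
            }
        }
        return out
    }

    func main() {
        images := parseOverrides("MetricsServer=registry.k8s.io/echoserver:1.4")
        registries := parseOverrides("MetricsServer=fake.domain")
        fmt.Printf("MetricsServer image %q pulled from registry %q\n",
            images["MetricsServer"], registries["MetricsServer"])
    }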

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-946000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-946000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.169376125s)

                                                
                                                
-- stdout --
	* [embed-certs-946000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-946000" primary control-plane node in "embed-certs-946000" cluster
	* Restarting existing qemu2 VM for "embed-certs-946000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-946000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 04:42:42.174455   22933 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:42.174587   22933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:42.174591   22933 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:42.174593   22933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:42.174741   22933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:42.175745   22933 out.go:352] Setting JSON to false
	I0923 04:42:42.191672   22933 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9733,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:42.191753   22933 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:42.196440   22933 out.go:177] * [embed-certs-946000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:42.204455   22933 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:42.204504   22933 notify.go:220] Checking for updates...
	I0923 04:42:42.210362   22933 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:42.213369   22933 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:42.216438   22933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:42.217847   22933 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:42.221344   22933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:42.224687   22933 config.go:182] Loaded profile config "embed-certs-946000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:42.224952   22933 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:42.229195   22933 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:42:42.236428   22933 start.go:297] selected driver: qemu2
	I0923 04:42:42.236433   22933 start.go:901] validating driver "qemu2" against &{Name:embed-certs-946000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-946000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:42.236495   22933 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:42.238686   22933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:42:42.238709   22933 cni.go:84] Creating CNI manager for ""
	I0923 04:42:42.238737   22933 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:42:42.238764   22933 start.go:340] cluster config:
	{Name:embed-certs-946000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-946000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:42.242165   22933 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:42.250345   22933 out.go:177] * Starting "embed-certs-946000" primary control-plane node in "embed-certs-946000" cluster
	I0923 04:42:42.254420   22933 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:42:42.254442   22933 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:42:42.254454   22933 cache.go:56] Caching tarball of preloaded images
	I0923 04:42:42.254521   22933 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:42:42.254528   22933 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:42:42.254598   22933 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/embed-certs-946000/config.json ...
	I0923 04:42:42.255087   22933 start.go:360] acquireMachinesLock for embed-certs-946000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:42.255117   22933 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "embed-certs-946000"
	I0923 04:42:42.255127   22933 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:42:42.255132   22933 fix.go:54] fixHost starting: 
	I0923 04:42:42.255251   22933 fix.go:112] recreateIfNeeded on embed-certs-946000: state=Stopped err=<nil>
	W0923 04:42:42.255260   22933 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:42:42.263397   22933 out.go:177] * Restarting existing qemu2 VM for "embed-certs-946000" ...
	I0923 04:42:42.267366   22933 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:42.267400   22933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cb:33:79:ce:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:42.269412   22933 main.go:141] libmachine: STDOUT: 
	I0923 04:42:42.269432   22933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:42.269463   22933 fix.go:56] duration metric: took 14.329625ms for fixHost
	I0923 04:42:42.269469   22933 start.go:83] releasing machines lock for "embed-certs-946000", held for 14.347ms
	W0923 04:42:42.269475   22933 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:42.269506   22933 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:42.269511   22933 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:47.270205   22933 start.go:360] acquireMachinesLock for embed-certs-946000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:47.270285   22933 start.go:364] duration metric: took 59.458µs to acquireMachinesLock for "embed-certs-946000"
	I0923 04:42:47.270301   22933 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:42:47.270305   22933 fix.go:54] fixHost starting: 
	I0923 04:42:47.270443   22933 fix.go:112] recreateIfNeeded on embed-certs-946000: state=Stopped err=<nil>
	W0923 04:42:47.270449   22933 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:42:47.274902   22933 out.go:177] * Restarting existing qemu2 VM for "embed-certs-946000" ...
	I0923 04:42:47.283845   22933 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:47.283897   22933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cb:33:79:ce:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/embed-certs-946000/disk.qcow2
	I0923 04:42:47.285811   22933 main.go:141] libmachine: STDOUT: 
	I0923 04:42:47.285826   22933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:47.285848   22933 fix.go:56] duration metric: took 15.543459ms for fixHost
	I0923 04:42:47.285854   22933 start.go:83] releasing machines lock for "embed-certs-946000", held for 15.564208ms
	W0923 04:42:47.285909   22933 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-946000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-946000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:47.293828   22933 out.go:201] 
	W0923 04:42:47.297954   22933 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:47.297960   22933 out.go:270] * 
	* 
	W0923 04:42:47.298454   22933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:47.311879   22933 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-946000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (31.532042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.20s)
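
Note: this restart, like every other qemu2 start in this report, dies on the same host-side error: Failed to connect to "/var/run/socket_vmnet": Connection refused. minikube launches the guest through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet; the refusal means nothing is attached to that socket on the build agent, so no VM can acquire its network. A minimal pre-flight sketch for the agent, using only paths that appear in the log (the pgrep pattern is an assumption about the daemon's process name):

	# Sketch: confirm socket_vmnet is reachable before running the suite.
	ls -l /var/run/socket_vmnet || echo "socket file missing"
	pgrep -fl socket_vmnet || echo "no socket_vmnet daemon running"   # process name assumed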

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-953000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-953000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.901011125s)

-- stdout --
	* [default-k8s-diff-port-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-953000" primary control-plane node in "default-k8s-diff-port-953000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-953000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:42:47.374606   22956 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:47.374736   22956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:47.374739   22956 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:47.374741   22956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:47.374873   22956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:47.376022   22956 out.go:352] Setting JSON to false
	I0923 04:42:47.393715   22956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9738,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:47.393802   22956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:47.397879   22956 out.go:177] * [default-k8s-diff-port-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:47.404857   22956 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:47.404896   22956 notify.go:220] Checking for updates...
	I0923 04:42:47.412857   22956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:47.416882   22956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:47.419852   22956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:47.422851   22956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:47.425834   22956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:47.430218   22956 config.go:182] Loaded profile config "embed-certs-946000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:47.430280   22956 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:47.430332   22956 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:47.433795   22956 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:42:47.442908   22956 start.go:297] selected driver: qemu2
	I0923 04:42:47.442914   22956 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:42:47.442921   22956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:47.445204   22956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:42:47.448866   22956 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:42:47.451970   22956 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:42:47.451987   22956 cni.go:84] Creating CNI manager for ""
	I0923 04:42:47.452008   22956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:42:47.452015   22956 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:42:47.452053   22956 start.go:340] cluster config:
	{Name:default-k8s-diff-port-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:47.456077   22956 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:47.462836   22956 out.go:177] * Starting "default-k8s-diff-port-953000" primary control-plane node in "default-k8s-diff-port-953000" cluster
	I0923 04:42:47.466873   22956 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:42:47.466898   22956 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:42:47.466906   22956 cache.go:56] Caching tarball of preloaded images
	I0923 04:42:47.466988   22956 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:42:47.466995   22956 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:42:47.467059   22956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/default-k8s-diff-port-953000/config.json ...
	I0923 04:42:47.467069   22956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/default-k8s-diff-port-953000/config.json: {Name:mk2c76f10a3e6475109548701718fece4a986b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:42:47.467383   22956 start.go:360] acquireMachinesLock for default-k8s-diff-port-953000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:47.467416   22956 start.go:364] duration metric: took 24.792µs to acquireMachinesLock for "default-k8s-diff-port-953000"
	I0923 04:42:47.467429   22956 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:47.467454   22956 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:47.471892   22956 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:47.487593   22956 start.go:159] libmachine.API.Create for "default-k8s-diff-port-953000" (driver="qemu2")
	I0923 04:42:47.487621   22956 client.go:168] LocalClient.Create starting
	I0923 04:42:47.487679   22956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:47.487713   22956 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:47.487723   22956 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:47.487768   22956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:47.487792   22956 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:47.487799   22956 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:47.488131   22956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:47.680811   22956 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:47.807807   22956 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:47.807818   22956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:47.807975   22956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:42:47.817536   22956 main.go:141] libmachine: STDOUT: 
	I0923 04:42:47.817562   22956 main.go:141] libmachine: STDERR: 
	I0923 04:42:47.817634   22956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2 +20000M
	I0923 04:42:47.826543   22956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:47.826560   22956 main.go:141] libmachine: STDERR: 
	I0923 04:42:47.826575   22956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:42:47.826581   22956 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:47.826594   22956 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:47.826627   22956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c7:f1:b4:9f:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:42:47.828210   22956 main.go:141] libmachine: STDOUT: 
	I0923 04:42:47.828225   22956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:47.828247   22956 client.go:171] duration metric: took 340.621791ms to LocalClient.Create
	I0923 04:42:49.830443   22956 start.go:128] duration metric: took 2.362976875s to createHost
	I0923 04:42:49.830571   22956 start.go:83] releasing machines lock for "default-k8s-diff-port-953000", held for 2.36315425s
	W0923 04:42:49.830631   22956 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:49.848710   22956 out.go:177] * Deleting "default-k8s-diff-port-953000" in qemu2 ...
	W0923 04:42:49.875909   22956 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:49.875939   22956 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:54.878210   22956 start.go:360] acquireMachinesLock for default-k8s-diff-port-953000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:54.878744   22956 start.go:364] duration metric: took 411.166µs to acquireMachinesLock for "default-k8s-diff-port-953000"
	I0923 04:42:54.878901   22956 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:54.879220   22956 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:54.888789   22956 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:54.938394   22956 start.go:159] libmachine.API.Create for "default-k8s-diff-port-953000" (driver="qemu2")
	I0923 04:42:54.938451   22956 client.go:168] LocalClient.Create starting
	I0923 04:42:54.938581   22956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:54.938650   22956 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:54.938667   22956 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:54.938727   22956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:54.938773   22956 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:54.938784   22956 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:54.939339   22956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:55.112270   22956 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:55.171130   22956 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:55.171136   22956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:55.171354   22956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:42:55.180536   22956 main.go:141] libmachine: STDOUT: 
	I0923 04:42:55.180556   22956 main.go:141] libmachine: STDERR: 
	I0923 04:42:55.180623   22956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2 +20000M
	I0923 04:42:55.188484   22956 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:55.188497   22956 main.go:141] libmachine: STDERR: 
	I0923 04:42:55.188511   22956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:42:55.188515   22956 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:55.188525   22956 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:55.188551   22956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:3c:10:0c:57:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:42:55.190244   22956 main.go:141] libmachine: STDOUT: 
	I0923 04:42:55.190261   22956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:55.190274   22956 client.go:171] duration metric: took 251.817375ms to LocalClient.Create
	I0923 04:42:57.192440   22956 start.go:128] duration metric: took 2.3131975s to createHost
	I0923 04:42:57.192509   22956 start.go:83] releasing machines lock for "default-k8s-diff-port-953000", held for 2.313751208s
	W0923 04:42:57.193004   22956 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-953000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:57.212825   22956 out.go:201] 
	W0923 04:42:57.216796   22956 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:57.216829   22956 out.go:270] * 
	* 
	W0923 04:42:57.219348   22956 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:57.232699   22956 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-953000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (67.141375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.97s)
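
Note: the refusal hits fresh creates as well as restarts; both "Creating qemu2 VM" attempts above abort before the guest ever boots. If the daemon simply is not running, the socket_vmnet project documents a manual start along these lines (a sketch: the binary and socket paths match this log, but the gateway address is the upstream default, not a value taken from this report):

	# Hedged sketch of socket_vmnet's documented manual start; run as root.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet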

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-946000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (29.877667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-946000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-946000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-946000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.781542ms)

** stderr ** 
	error: context "embed-certs-946000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-946000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (35.908ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
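
Note: UserAppExistsAfterStop and AddonExistsAfterStop are cascading failures, not independent ones: SecondStart never brought the cluster back, so the kubeconfig context "embed-certs-946000" was never recreated, and every kubectl call aborts at client-config time, before any API server is contacted. The missing context can be confirmed directly:

	# The context named in the error should be absent from this listing.
	kubectl config get-contexts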

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-946000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (33.68875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
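
Note: the diff above lists the entire expected image set under "-" (want) and nothing under "+" (got). Since the VM never booted, image list has no container runtime to query and returns an empty set, so this failure carries no signal beyond the earlier start failure. Re-running the listing by hand makes that obvious (table output is assumed to be an accepted --format value alongside the json used by the test):

	out/minikube-darwin-arm64 -p embed-certs-946000 image list --format=table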

TestStartStop/group/embed-certs/serial/Pause (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-946000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-946000 --alsologtostderr -v=1: exit status 83 (68.077166ms)

-- stdout --
	* The control-plane node embed-certs-946000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-946000"

-- /stdout --
** stderr ** 
	I0923 04:42:47.552718   22972 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:47.552878   22972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:47.552882   22972 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:47.552884   22972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:47.553031   22972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:47.553261   22972 out.go:352] Setting JSON to false
	I0923 04:42:47.553270   22972 mustload.go:65] Loading cluster: embed-certs-946000
	I0923 04:42:47.553529   22972 config.go:182] Loaded profile config "embed-certs-946000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:47.573015   22972 out.go:177] * The control-plane node embed-certs-946000 host is not running: state=Stopped
	I0923 04:42:47.581890   22972 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-946000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-946000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (34.669125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (34.0835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-946000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.14s)
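
Note: pause exits with status 83 rather than 80 because mustload inspects the profile before doing any work: it finds state=Stopped, prints the advisory, and never attempts to pause anything. The advisory's own suggestion is the recovery path, though on this agent it will keep failing until socket_vmnet is reachable:

	# Recovery suggested by the output above; expect exit status 80 again
	# until the socket_vmnet daemon is fixed on the host.
	minikube start -p embed-certs-946000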

TestStartStop/group/newest-cni/serial/FirstStart (11.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-385000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-385000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.781062583s)

-- stdout --
	* [newest-cni-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-385000" primary control-plane node in "newest-cni-385000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-385000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0923 04:42:47.917638   22992 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:42:47.917773   22992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:47.917777   22992 out.go:358] Setting ErrFile to fd 2...
	I0923 04:42:47.917779   22992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:42:47.917913   22992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:42:47.919032   22992 out.go:352] Setting JSON to false
	I0923 04:42:47.935019   22992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9738,"bootTime":1727082029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:42:47.935083   22992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:42:47.939908   22992 out.go:177] * [newest-cni-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:42:47.944709   22992 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:42:47.944765   22992 notify.go:220] Checking for updates...
	I0923 04:42:47.951862   22992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:42:47.953374   22992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:42:47.957859   22992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:42:47.960881   22992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:42:47.962306   22992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:42:47.966225   22992 config.go:182] Loaded profile config "default-k8s-diff-port-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:47.966284   22992 config.go:182] Loaded profile config "multinode-090000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:42:47.966337   22992 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:42:47.969817   22992 out.go:177] * Using the qemu2 driver based on user configuration
	I0923 04:42:47.974871   22992 start.go:297] selected driver: qemu2
	I0923 04:42:47.974881   22992 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:42:47.974888   22992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:42:47.977109   22992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0923 04:42:47.977146   22992 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0923 04:42:47.983824   22992 out.go:177] * Automatically selected the socket_vmnet network
	I0923 04:42:47.986887   22992 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 04:42:47.986906   22992 cni.go:84] Creating CNI manager for ""
	I0923 04:42:47.986926   22992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:42:47.986930   22992 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:42:47.986959   22992 start.go:340] cluster config:
	{Name:newest-cni-385000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:42:47.990447   22992 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:42:47.998829   22992 out.go:177] * Starting "newest-cni-385000" primary control-plane node in "newest-cni-385000" cluster
	I0923 04:42:48.002856   22992 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:42:48.002872   22992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:42:48.002879   22992 cache.go:56] Caching tarball of preloaded images
	I0923 04:42:48.002952   22992 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:42:48.002958   22992 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:42:48.003016   22992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/newest-cni-385000/config.json ...
	I0923 04:42:48.003028   22992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/newest-cni-385000/config.json: {Name:mk43c11bcfe5ad1b6d6563214622e481e29ff9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:42:48.003246   22992 start.go:360] acquireMachinesLock for newest-cni-385000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:49.830687   22992 start.go:364] duration metric: took 1.827424625s to acquireMachinesLock for "newest-cni-385000"
	I0923 04:42:49.830841   22992 start.go:93] Provisioning new machine with config: &{Name:newest-cni-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:49.831085   22992 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:49.840725   22992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:49.891477   22992 start.go:159] libmachine.API.Create for "newest-cni-385000" (driver="qemu2")
	I0923 04:42:49.891542   22992 client.go:168] LocalClient.Create starting
	I0923 04:42:49.891691   22992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:49.891750   22992 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:49.891769   22992 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:49.891831   22992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:49.891876   22992 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:49.891890   22992 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:49.892573   22992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:50.062840   22992 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:50.222761   22992 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:50.222767   22992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:50.222997   22992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:42:50.232649   22992 main.go:141] libmachine: STDOUT: 
	I0923 04:42:50.232672   22992 main.go:141] libmachine: STDERR: 
	I0923 04:42:50.232741   22992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2 +20000M
	I0923 04:42:50.240697   22992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:50.240711   22992 main.go:141] libmachine: STDERR: 
	I0923 04:42:50.240737   22992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:42:50.240744   22992 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:50.240754   22992 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:50.240779   22992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:85:b3:8b:0c:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:42:50.242469   22992 main.go:141] libmachine: STDOUT: 
	I0923 04:42:50.242482   22992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:50.242502   22992 client.go:171] duration metric: took 350.953541ms to LocalClient.Create
	I0923 04:42:52.244669   22992 start.go:128] duration metric: took 2.413564667s to createHost
	I0923 04:42:52.244714   22992 start.go:83] releasing machines lock for "newest-cni-385000", held for 2.413993958s
	W0923 04:42:52.244779   22992 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:52.257970   22992 out.go:177] * Deleting "newest-cni-385000" in qemu2 ...
	W0923 04:42:52.299394   22992 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:52.299427   22992 start.go:729] Will try again in 5 seconds ...
	I0923 04:42:57.301771   22992 start.go:360] acquireMachinesLock for newest-cni-385000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:42:57.301940   22992 start.go:364] duration metric: took 127.833µs to acquireMachinesLock for "newest-cni-385000"
	I0923 04:42:57.301974   22992 start.go:93] Provisioning new machine with config: &{Name:newest-cni-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:newest-cni-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 04:42:57.302048   22992 start.go:125] createHost starting for "" (driver="qemu2")
	I0923 04:42:57.306371   22992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 04:42:57.327689   22992 start.go:159] libmachine.API.Create for "newest-cni-385000" (driver="qemu2")
	I0923 04:42:57.327730   22992 client.go:168] LocalClient.Create starting
	I0923 04:42:57.327806   22992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/ca.pem
	I0923 04:42:57.327835   22992 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:57.327847   22992 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:57.327890   22992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19690-18362/.minikube/certs/cert.pem
	I0923 04:42:57.327912   22992 main.go:141] libmachine: Decoding PEM data...
	I0923 04:42:57.327921   22992 main.go:141] libmachine: Parsing certificate...
	I0923 04:42:57.328265   22992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso...
	I0923 04:42:57.527134   22992 main.go:141] libmachine: Creating SSH key...
	I0923 04:42:57.617196   22992 main.go:141] libmachine: Creating Disk image...
	I0923 04:42:57.617203   22992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0923 04:42:57.617398   22992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2.raw /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:42:57.626388   22992 main.go:141] libmachine: STDOUT: 
	I0923 04:42:57.626410   22992 main.go:141] libmachine: STDERR: 
	I0923 04:42:57.626475   22992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2 +20000M
	I0923 04:42:57.634940   22992 main.go:141] libmachine: STDOUT: Image resized.
	
	I0923 04:42:57.634963   22992 main.go:141] libmachine: STDERR: 
	I0923 04:42:57.634976   22992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:42:57.634982   22992 main.go:141] libmachine: Starting QEMU VM...
	I0923 04:42:57.634996   22992 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:42:57.635034   22992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:93:80:c7:12:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:42:57.636793   22992 main.go:141] libmachine: STDOUT: 
	I0923 04:42:57.636813   22992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:42:57.636833   22992 client.go:171] duration metric: took 309.100083ms to LocalClient.Create
	I0923 04:42:59.637353   22992 start.go:128] duration metric: took 2.335287375s to createHost
	I0923 04:42:59.637423   22992 start.go:83] releasing machines lock for "newest-cni-385000", held for 2.335480542s
	W0923 04:42:59.637768   22992 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-385000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:42:59.641396   22992 out.go:201] 
	W0923 04:42:59.646898   22992 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:42:59.646929   22992 out.go:270] * 
	W0923 04:42:59.648606   22992 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:42:59.657360   22992 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-385000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000: exit status 7 (67.207208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.85s)
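
Note: every start failure in this run reduces to the same host-side error: socket_vmnet_client could not reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never launched. socket_vmnet_client connects to that socket and then execs QEMU, handing it the connected socket as file descriptor 3, which is what the "-netdev socket,id=net0,fd=3" argument in the command lines above refers to. A minimal triage sketch on the CI host, assuming a Homebrew-managed socket_vmnet install (illustrative commands, not part of the captured log):

	# Is the launchd service loaded, and does its socket exist?
	sudo launchctl list | grep socket_vmnet
	ls -l /var/run/socket_vmnet
	# If the socket is missing, restart the service (Homebrew-managed install assumed):
	sudo brew services restart socket_vmnet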

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-953000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-953000 create -f testdata/busybox.yaml: exit status 1 (32.077208ms)

** stderr ** 
	error: context "default-k8s-diff-port-953000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-953000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (33.84125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (33.952166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
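
Note: the "context does not exist" error above is a downstream symptom: the cluster never started, so minikube never wrote a "default-k8s-diff-port-953000" context into the kubeconfig. One way to confirm against the kubeconfig used by this run (illustrative command, not from the log):

	KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig kubectl config get-contexts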

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-953000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-953000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-953000 describe deploy/metrics-server -n kube-system: exit status 1 (30.364625ms)

** stderr ** 
	error: context "default-k8s-diff-port-953000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-953000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (31.075708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)
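
Note: this test asserts that the metrics-server deployment carries the overridden image "fake.domain/registry.k8s.io/echoserver:1.4". On a healthy cluster the image in use could be checked with something like (illustrative, requires a reachable apiserver):

	kubectl --context default-k8s-diff-port-953000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'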

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-953000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-953000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.18570525s)

-- stdout --
	* [default-k8s-diff-port-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-953000" primary control-plane node in "default-k8s-diff-port-953000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-953000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0923 04:43:01.546769   23060 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:43:01.546908   23060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:01.546913   23060 out.go:358] Setting ErrFile to fd 2...
	I0923 04:43:01.546916   23060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:01.547045   23060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:43:01.548048   23060 out.go:352] Setting JSON to false
	I0923 04:43:01.564054   23060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9752,"bootTime":1727082029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:43:01.564135   23060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:43:01.569298   23060 out.go:177] * [default-k8s-diff-port-953000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:43:01.576241   23060 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:43:01.576287   23060 notify.go:220] Checking for updates...
	I0923 04:43:01.584240   23060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:43:01.587266   23060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:43:01.590225   23060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:43:01.593186   23060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:43:01.596215   23060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:43:01.599488   23060 config.go:182] Loaded profile config "default-k8s-diff-port-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:43:01.599754   23060 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:43:01.603190   23060 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:43:01.610208   23060 start.go:297] selected driver: qemu2
	I0923 04:43:01.610214   23060 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-953000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:43:01.610278   23060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:43:01.612829   23060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 04:43:01.612865   23060 cni.go:84] Creating CNI manager for ""
	I0923 04:43:01.612887   23060 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:43:01.612909   23060 start.go:340] cluster config:
	{Name:default-k8s-diff-port-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-953000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:43:01.616470   23060 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:43:01.624206   23060 out.go:177] * Starting "default-k8s-diff-port-953000" primary control-plane node in "default-k8s-diff-port-953000" cluster
	I0923 04:43:01.628090   23060 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:43:01.628106   23060 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:43:01.628115   23060 cache.go:56] Caching tarball of preloaded images
	I0923 04:43:01.628190   23060 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:43:01.628196   23060 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:43:01.628258   23060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/default-k8s-diff-port-953000/config.json ...
	I0923 04:43:01.628697   23060 start.go:360] acquireMachinesLock for default-k8s-diff-port-953000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:43:01.628725   23060 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "default-k8s-diff-port-953000"
	I0923 04:43:01.628734   23060 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:43:01.628739   23060 fix.go:54] fixHost starting: 
	I0923 04:43:01.628859   23060 fix.go:112] recreateIfNeeded on default-k8s-diff-port-953000: state=Stopped err=<nil>
	W0923 04:43:01.628867   23060 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:43:01.632299   23060 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-953000" ...
	I0923 04:43:01.640225   23060 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:43:01.640261   23060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:3c:10:0c:57:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:43:01.642222   23060 main.go:141] libmachine: STDOUT: 
	I0923 04:43:01.642249   23060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:43:01.642279   23060 fix.go:56] duration metric: took 13.538333ms for fixHost
	I0923 04:43:01.642284   23060 start.go:83] releasing machines lock for "default-k8s-diff-port-953000", held for 13.554917ms
	W0923 04:43:01.642292   23060 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:43:01.642327   23060 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:43:01.642332   23060 start.go:729] Will try again in 5 seconds ...
	I0923 04:43:06.644538   23060 start.go:360] acquireMachinesLock for default-k8s-diff-port-953000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:43:06.645092   23060 start.go:364] duration metric: took 394.042µs to acquireMachinesLock for "default-k8s-diff-port-953000"
	I0923 04:43:06.645216   23060 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:43:06.645241   23060 fix.go:54] fixHost starting: 
	I0923 04:43:06.646093   23060 fix.go:112] recreateIfNeeded on default-k8s-diff-port-953000: state=Stopped err=<nil>
	W0923 04:43:06.646129   23060 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:43:06.654667   23060 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-953000" ...
	I0923 04:43:06.657742   23060 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:43:06.657981   23060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:3c:10:0c:57:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/default-k8s-diff-port-953000/disk.qcow2
	I0923 04:43:06.666912   23060 main.go:141] libmachine: STDOUT: 
	I0923 04:43:06.666966   23060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:43:06.667032   23060 fix.go:56] duration metric: took 21.796167ms for fixHost
	I0923 04:43:06.667052   23060 start.go:83] releasing machines lock for "default-k8s-diff-port-953000", held for 21.939417ms
	W0923 04:43:06.667264   23060 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-953000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:43:06.674719   23060 out.go:201] 
	W0923 04:43:06.678763   23060 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:43:06.678804   23060 out.go:270] * 
	W0923 04:43:06.681292   23060 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:43:06.689768   23060 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-953000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (66.143292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
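
Note: once socket_vmnet is reachable again, the remediation the output itself suggests applies to both stale profiles, e.g. with the same test binary:

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-953000
	out/minikube-darwin-arm64 delete -p newest-cni-385000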

TestStartStop/group/newest-cni/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-385000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-385000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.184412417s)

-- stdout --
	* [newest-cni-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-385000" primary control-plane node in "newest-cni-385000" cluster
	* Restarting existing qemu2 VM for "newest-cni-385000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-385000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0923 04:43:01.971486   23076 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:43:01.971613   23076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:01.971616   23076 out.go:358] Setting ErrFile to fd 2...
	I0923 04:43:01.971618   23076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:01.971733   23076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:43:01.972918   23076 out.go:352] Setting JSON to false
	I0923 04:43:01.988877   23076 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":9752,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:43:01.988939   23076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:43:01.993319   23076 out.go:177] * [newest-cni-385000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:43:02.001262   23076 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:43:02.001327   23076 notify.go:220] Checking for updates...
	I0923 04:43:02.008287   23076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:43:02.011297   23076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:43:02.014356   23076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:43:02.017287   23076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:43:02.020318   23076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:43:02.023640   23076 config.go:182] Loaded profile config "newest-cni-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:43:02.023920   23076 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:43:02.027247   23076 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:43:02.034340   23076 start.go:297] selected driver: qemu2
	I0923 04:43:02.034347   23076 start.go:901] validating driver "qemu2" against &{Name:newest-cni-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-385000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:43:02.034412   23076 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:43:02.036761   23076 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 04:43:02.036783   23076 cni.go:84] Creating CNI manager for ""
	I0923 04:43:02.036809   23076 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:43:02.036836   23076 start.go:340] cluster config:
	{Name:newest-cni-385000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-385000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:43:02.040359   23076 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:43:02.049310   23076 out.go:177] * Starting "newest-cni-385000" primary control-plane node in "newest-cni-385000" cluster
	I0923 04:43:02.053265   23076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:43:02.053279   23076 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:43:02.053285   23076 cache.go:56] Caching tarball of preloaded images
	I0923 04:43:02.053335   23076 preload.go:172] Found /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0923 04:43:02.053340   23076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 04:43:02.053397   23076 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/newest-cni-385000/config.json ...
	I0923 04:43:02.053857   23076 start.go:360] acquireMachinesLock for newest-cni-385000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:43:02.053887   23076 start.go:364] duration metric: took 23.666µs to acquireMachinesLock for "newest-cni-385000"
	I0923 04:43:02.053899   23076 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:43:02.053904   23076 fix.go:54] fixHost starting: 
	I0923 04:43:02.054018   23076 fix.go:112] recreateIfNeeded on newest-cni-385000: state=Stopped err=<nil>
	W0923 04:43:02.054026   23076 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:43:02.058345   23076 out.go:177] * Restarting existing qemu2 VM for "newest-cni-385000" ...
	I0923 04:43:02.066266   23076 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:43:02.066300   23076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:93:80:c7:12:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:43:02.068273   23076 main.go:141] libmachine: STDOUT: 
	I0923 04:43:02.068293   23076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:43:02.068324   23076 fix.go:56] duration metric: took 14.418959ms for fixHost
	I0923 04:43:02.068329   23076 start.go:83] releasing machines lock for "newest-cni-385000", held for 14.438125ms
	W0923 04:43:02.068336   23076 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:43:02.068366   23076 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:43:02.068370   23076 start.go:729] Will try again in 5 seconds ...
	I0923 04:43:07.070432   23076 start.go:360] acquireMachinesLock for newest-cni-385000: {Name:mk0381f5a44df4a5a0423d9c2c1abe88b3bf05df Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 04:43:07.070512   23076 start.go:364] duration metric: took 57µs to acquireMachinesLock for "newest-cni-385000"
	I0923 04:43:07.070539   23076 start.go:96] Skipping create...Using existing machine configuration
	I0923 04:43:07.070544   23076 fix.go:54] fixHost starting: 
	I0923 04:43:07.070670   23076 fix.go:112] recreateIfNeeded on newest-cni-385000: state=Stopped err=<nil>
	W0923 04:43:07.070675   23076 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 04:43:07.075321   23076 out.go:177] * Restarting existing qemu2 VM for "newest-cni-385000" ...
	I0923 04:43:07.079363   23076 qemu.go:418] Using hvf for hardware acceleration
	I0923 04:43:07.079441   23076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:93:80:c7:12:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19690-18362/.minikube/machines/newest-cni-385000/disk.qcow2
	I0923 04:43:07.081526   23076 main.go:141] libmachine: STDOUT: 
	I0923 04:43:07.081541   23076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0923 04:43:07.081558   23076 fix.go:56] duration metric: took 11.014041ms for fixHost
	I0923 04:43:07.081562   23076 start.go:83] releasing machines lock for "newest-cni-385000", held for 11.038959ms
	W0923 04:43:07.081600   23076 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-385000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-385000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0923 04:43:07.091339   23076 out.go:201] 
	W0923 04:43:07.098381   23076 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0923 04:43:07.098387   23076 out.go:270] * 
	* 
	W0923 04:43:07.098836   23076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:43:07.115322   23076 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-385000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000: exit status 7 (31.33525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.22s)
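Every driver-start failure in this test reduces to the qemu2 driver being unable to dial the socket_vmnet unix socket on the host. Below is a minimal standalone Go sketch (not part of the minikube test suite; the socket path is taken from the SocketVMnetPath field in the cluster config logged above) that reproduces the same dial error outside the harness:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Path copied from SocketVMnetPath in the cluster config above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// With no socket_vmnet daemon listening, this fails with
		// "connection refused" (or "no such file or directory"),
		// matching the StartHost failures in this report.
		fmt.Printf("dial %s failed: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}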

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-953000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (32.183959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-953000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-953000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-953000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.092958ms)

** stderr ** 
	error: context "default-k8s-diff-port-953000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-953000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (29.612917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-953000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (29.097541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-953000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-953000 --alsologtostderr -v=1: exit status 83 (41.243ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-953000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-953000"

-- /stdout --
** stderr ** 
	I0923 04:43:06.958183   23095 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:43:06.958343   23095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:06.958346   23095 out.go:358] Setting ErrFile to fd 2...
	I0923 04:43:06.958349   23095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:06.958491   23095 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:43:06.958706   23095 out.go:352] Setting JSON to false
	I0923 04:43:06.958719   23095 mustload.go:65] Loading cluster: default-k8s-diff-port-953000
	I0923 04:43:06.958948   23095 config.go:182] Loaded profile config "default-k8s-diff-port-953000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:43:06.963401   23095 out.go:177] * The control-plane node default-k8s-diff-port-953000 host is not running: state=Stopped
	I0923 04:43:06.967363   23095 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-953000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-953000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (29.210792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (29.344709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-953000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-385000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000: exit status 7 (29.917083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-385000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-385000 --alsologtostderr -v=1: exit status 83 (44.630292ms)

-- stdout --
	* The control-plane node newest-cni-385000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-385000"

-- /stdout --
** stderr ** 
	I0923 04:43:07.262093   23116 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:43:07.262262   23116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:07.262266   23116 out.go:358] Setting ErrFile to fd 2...
	I0923 04:43:07.262268   23116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:43:07.262385   23116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:43:07.262624   23116 out.go:352] Setting JSON to false
	I0923 04:43:07.262632   23116 mustload.go:65] Loading cluster: newest-cni-385000
	I0923 04:43:07.262865   23116 config.go:182] Loaded profile config "newest-cni-385000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:43:07.266383   23116 out.go:177] * The control-plane node newest-cni-385000 host is not running: state=Stopped
	I0923 04:43:07.274324   23116 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-385000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-385000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000: exit status 7 (29.834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-385000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000: exit status 7 (30.930417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-385000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 7.52
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.85
39 TestErrorSpam/start 0.39
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.12
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 9.43
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.64
55 TestFunctional/serial/CacheCmd/cache/add_local 1.07
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.22
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.35
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.8
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.07
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.1
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.2
193 TestMainNoArgs 0.03
238 TestStoppedBinaryUpgrade/Setup 1.36
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
258 TestNoKubernetes/serial/ProfileList 0.1
259 TestNoKubernetes/serial/Stop 3.25
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
275 TestStartStop/group/old-k8s-version/serial/Stop 2.01
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 3.61
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
297 TestStartStop/group/embed-certs/serial/Stop 2.82
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.84
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
313 TestStartStop/group/newest-cni/serial/Stop 2.03
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 04:16:25.456725   18914 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 04:16:25.457122   18914 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-294000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-294000: exit status 85 (94.646708ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |          |
	|         | -p download-only-294000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 04:16:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 04:16:12.070127   18917 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:16:12.070287   18917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:12.070290   18917 out.go:358] Setting ErrFile to fd 2...
	I0923 04:16:12.070293   18917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:12.070426   18917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	W0923 04:16:12.070513   18917 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19690-18362/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19690-18362/.minikube/config/config.json: no such file or directory
	I0923 04:16:12.071767   18917 out.go:352] Setting JSON to true
	I0923 04:16:12.088156   18917 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8143,"bootTime":1727082029,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:16:12.088215   18917 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:16:12.092700   18917 out.go:97] [download-only-294000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:16:12.092892   18917 notify.go:220] Checking for updates...
	W0923 04:16:12.092900   18917 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 04:16:12.097643   18917 out.go:169] MINIKUBE_LOCATION=19690
	I0923 04:16:12.101651   18917 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:16:12.106676   18917 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:16:12.110663   18917 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:16:12.113612   18917 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	W0923 04:16:12.119650   18917 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 04:16:12.119857   18917 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:16:12.122651   18917 out.go:97] Using the qemu2 driver based on user configuration
	I0923 04:16:12.122672   18917 start.go:297] selected driver: qemu2
	I0923 04:16:12.122676   18917 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:16:12.122761   18917 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:16:12.125661   18917 out.go:169] Automatically selected the socket_vmnet network
	I0923 04:16:12.130943   18917 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 04:16:12.131038   18917 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 04:16:12.131094   18917 cni.go:84] Creating CNI manager for ""
	I0923 04:16:12.131135   18917 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 04:16:12.131172   18917 start.go:340] cluster config:
	{Name:download-only-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:16:12.134971   18917 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:16:12.138677   18917 out.go:97] Downloading VM boot image ...
	I0923 04:16:12.138697   18917 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/iso/arm64/minikube-v1.34.0-1726784654-19672-arm64.iso
	I0923 04:16:17.764375   18917 out.go:97] Starting "download-only-294000" primary control-plane node in "download-only-294000" cluster
	I0923 04:16:17.764399   18917 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:16:17.820396   18917 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:16:17.820406   18917 cache.go:56] Caching tarball of preloaded images
	I0923 04:16:17.820581   18917 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:16:17.825322   18917 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 04:16:17.825329   18917 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:17.908742   18917 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0923 04:16:24.192303   18917 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:24.192495   18917 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:24.888677   18917 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 04:16:24.888900   18917 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/download-only-294000/config.json ...
	I0923 04:16:24.888920   18917 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19690-18362/.minikube/profiles/download-only-294000/config.json: {Name:mk4bb948808b67e8544bc89978580e0632134115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 04:16:24.889162   18917 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 04:16:24.890052   18917 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0923 04:16:25.399468   18917 out.go:193] 
	W0923 04:16:25.408490   18917 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19690-18362/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0 0x10890d6c0] Decompressors:map[bz2:0x14000759950 gz:0x14000759958 tar:0x14000759880 tar.bz2:0x14000759890 tar.gz:0x140007598d0 tar.xz:0x14000759920 tar.zst:0x14000759930 tbz2:0x14000759890 tgz:0x140007598d0 txz:0x14000759920 tzst:0x14000759930 xz:0x14000759960 zip:0x14000759970 zst:0x14000759968] Getters:map[file:0x14001a62590 http:0x14000884140 https:0x14000884190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0923 04:16:25.408520   18917 out_reason.go:110] 
	W0923 04:16:25.418211   18917 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 04:16:25.422319   18917 out.go:193] 
	
	
	* The control-plane node download-only-294000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-294000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
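The "minikube logs" exit status 85 is tolerated here (the test still passes); the notable failure captured in the log above is the kubectl cache step, whose checksum URL answered HTTP 404. A minimal Go sketch (assuming outbound network access; it simply replays the request for the checksum URL recorded in the log) to confirm the response code:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied from the download failure logged above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// At the time of this report the server returned 404 for this file,
	// which is what makes the go-getter checksum validation fail.
	fmt.Println(resp.Status)
}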

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-294000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (7.52s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-913000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-913000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (7.517587917s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.52s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 04:16:33.332813   18914 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 04:16:33.332880   18914 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-913000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-913000: exit status 85 (77.32425ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | -p download-only-294000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| delete  | -p download-only-294000        | download-only-294000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT | 23 Sep 24 04:16 PDT |
	| start   | -o=json --download-only        | download-only-913000 | jenkins | v1.34.0 | 23 Sep 24 04:16 PDT |                     |
	|         | -p download-only-913000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 04:16:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 04:16:25.843479   18952 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:16:25.843597   18952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:25.843601   18952 out.go:358] Setting ErrFile to fd 2...
	I0923 04:16:25.843604   18952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:16:25.843745   18952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:16:25.844812   18952 out.go:352] Setting JSON to true
	I0923 04:16:25.860927   18952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8156,"bootTime":1727082029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:16:25.860997   18952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:16:25.865993   18952 out.go:97] [download-only-913000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:16:25.866102   18952 notify.go:220] Checking for updates...
	I0923 04:16:25.870018   18952 out.go:169] MINIKUBE_LOCATION=19690
	I0923 04:16:25.871572   18952 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:16:25.874961   18952 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:16:25.877974   18952 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:16:25.879425   18952 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	W0923 04:16:25.885974   18952 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 04:16:25.886168   18952 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:16:25.888919   18952 out.go:97] Using the qemu2 driver based on user configuration
	I0923 04:16:25.888927   18952 start.go:297] selected driver: qemu2
	I0923 04:16:25.888932   18952 start.go:901] validating driver "qemu2" against <nil>
	I0923 04:16:25.888976   18952 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 04:16:25.891910   18952 out.go:169] Automatically selected the socket_vmnet network
	I0923 04:16:25.897232   18952 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0923 04:16:25.897330   18952 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 04:16:25.897350   18952 cni.go:84] Creating CNI manager for ""
	I0923 04:16:25.897379   18952 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 04:16:25.897384   18952 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 04:16:25.897432   18952 start.go:340] cluster config:
	{Name:download-only-913000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-913000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:16:25.900920   18952 iso.go:125] acquiring lock: {Name:mkc4ea805bb8ee225fdff95431783dea7102ac86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 04:16:25.904000   18952 out.go:97] Starting "download-only-913000" primary control-plane node in "download-only-913000" cluster
	I0923 04:16:25.904009   18952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:16:25.958237   18952 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0923 04:16:25.958266   18952 cache.go:56] Caching tarball of preloaded images
	I0923 04:16:25.958442   18952 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 04:16:25.962531   18952 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 04:16:25.962538   18952 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0923 04:16:26.041374   18952 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19690-18362/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-913000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-913000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-913000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
I0923 04:16:33.840260   18914 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-684000 --alsologtostderr --binary-mirror http://127.0.0.1:53065 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-684000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-684000
--- PASS: TestBinaryMirror (0.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-040000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-040000: exit status 85 (58.175875ms)

-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-040000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-040000: exit status 85 (63.75675ms)

-- stdout --
	* Profile "addons-040000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.85s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0923 04:39:18.434656   18914 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 04:39:18.434806   18914 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W0923 04:39:20.339337   18914 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0923 04:39:20.339567   18914 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0923 04:39:20.339620   18914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.85s)

TestErrorSpam/start (0.39s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status: exit status 7 (31.84475ms)

-- stdout --
	nospam-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status: exit status 7 (30.171667ms)

-- stdout --
	nospam-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status: exit status 7 (30.021917ms)

-- stdout --
	nospam-693000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
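Note: the exit status 7 results above are expected rather than failures: minikube status encodes cluster state in its exit code, and in this run 7 corresponds to the host being Stopped. A quick manual check against the same profile:

	$ out/minikube-darwin-arm64 -p nospam-693000 status
	$ echo $?   # 7 while the host is stopped; 0 once everything reports Running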

TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause: exit status 83 (39.457625ms)

-- stdout --
	* The control-plane node nospam-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-693000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause: exit status 83 (40.412958ms)

-- stdout --
	* The control-plane node nospam-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-693000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause: exit status 83 (38.92475ms)

-- stdout --
	* The control-plane node nospam-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-693000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause: exit status 83 (40.570125ms)

-- stdout --
	* The control-plane node nospam-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-693000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause: exit status 83 (40.056333ms)

-- stdout --
	* The control-plane node nospam-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-693000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause: exit status 83 (40.740833ms)

-- stdout --
	* The control-plane node nospam-693000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-693000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (9.43s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 stop: (3.807147166s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 stop: (2.163627541s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-693000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-693000 stop: (3.452462125s)
--- PASS: TestErrorSpam/stop (9.43s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19690-18362/.minikube/files/etc/test/nested/copy/18914/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1519937870/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cache add minikube-local-cache-test:functional-539000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 cache delete minikube-local-cache-test:functional-539000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-539000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)
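Note: the add_local test exercises the full round trip for caching a locally built image. Condensed, with a hypothetical demo tag in place of the generated per-test tag:

	$ docker build -t minikube-local-cache-test:demo .   # build a throwaway image on the host
	$ out/minikube-darwin-arm64 -p functional-539000 cache add minikube-local-cache-test:demo
	$ out/minikube-darwin-arm64 -p functional-539000 cache delete minikube-local-cache-test:demo
	$ docker rmi minikube-local-cache-test:demo          # drop the host-side copy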

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.22s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 config get cpus: exit status 14 (31.121958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 config get cpus: exit status 14 (32.036291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
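Note: the exit status 14 results above are the intended usage-error path, not failures: config get returns non-zero whenever the key is unset, which is exactly what the test provokes before and after the set/unset cycle:

	$ out/minikube-darwin-arm64 -p functional-539000 config get cpus    # unset key -> exit 14
	$ out/minikube-darwin-arm64 -p functional-539000 config set cpus 2
	$ out/minikube-darwin-arm64 -p functional-539000 config get cpus    # prints 2, exit 0
	$ out/minikube-darwin-arm64 -p functional-539000 config unset cpus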

TestFunctional/parallel/DryRun (0.28s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-539000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-539000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (156.58375ms)

-- stdout --
	* [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 04:18:06.192465   19597 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:18:06.192660   19597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.192664   19597 out.go:358] Setting ErrFile to fd 2...
	I0923 04:18:06.192668   19597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.192832   19597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:18:06.194129   19597 out.go:352] Setting JSON to false
	I0923 04:18:06.214195   19597 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8257,"bootTime":1727082029,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:18:06.214267   19597 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:18:06.220119   19597 out.go:177] * [functional-539000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0923 04:18:06.226090   19597 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:18:06.226182   19597 notify.go:220] Checking for updates...
	I0923 04:18:06.233075   19597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:18:06.236053   19597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:18:06.239111   19597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:18:06.240533   19597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:18:06.244075   19597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:18:06.247462   19597 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:18:06.247763   19597 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:18:06.249429   19597 out.go:177] * Using the qemu2 driver based on existing profile
	I0923 04:18:06.256122   19597 start.go:297] selected driver: qemu2
	I0923 04:18:06.256131   19597 start.go:901] validating driver "qemu2" against &{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:18:06.256203   19597 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:18:06.263083   19597 out.go:201] 
	W0923 04:18:06.267102   19597 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 04:18:06.271115   19597 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-539000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
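Note: the non-zero exit here is deliberate; even with --dry-run, minikube validates --memory against the 1800MB usable minimum before doing anything else (the RSRC_INSUFFICIENT_REQ_MEMORY exit above). A variant that passes validation only needs an adequate allocation, e.g.:

	$ out/minikube-darwin-arm64 start -p functional-539000 --dry-run --memory=2200 --driver=qemu2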

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-539000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-539000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.834833ms)

-- stdout --
	* [functional-539000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 04:18:06.424258   19608 out.go:345] Setting OutFile to fd 1 ...
	I0923 04:18:06.424375   19608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.424379   19608 out.go:358] Setting ErrFile to fd 2...
	I0923 04:18:06.424382   19608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 04:18:06.424520   19608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19690-18362/.minikube/bin
	I0923 04:18:06.425912   19608 out.go:352] Setting JSON to false
	I0923 04:18:06.442676   19608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8257,"bootTime":1727082029,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0923 04:18:06.442770   19608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 04:18:06.446902   19608 out.go:177] * [functional-539000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0923 04:18:06.454142   19608 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 04:18:06.454231   19608 notify.go:220] Checking for updates...
	I0923 04:18:06.460068   19608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	I0923 04:18:06.463107   19608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0923 04:18:06.464419   19608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 04:18:06.467084   19608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	I0923 04:18:06.470202   19608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 04:18:06.473478   19608 config.go:182] Loaded profile config "functional-539000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 04:18:06.473763   19608 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 04:18:06.477969   19608 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0923 04:18:06.485075   19608 start.go:297] selected driver: qemu2
	I0923 04:18:06.485083   19608 start.go:901] validating driver "qemu2" against &{Name:functional-539000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-539000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 04:18:06.485145   19608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 04:18:06.491076   19608 out.go:201] 
	W0923 04:18:06.495034   19608 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 04:18:06.499041   19608 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.35s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.8s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.780490083s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-539000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image rm kicbase/echo-server:functional-539000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-539000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 image save --daemon kicbase/echo-server:functional-539000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-539000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "58.301792ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "37.907083ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "50.851417ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.882333ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011737292s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-539000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-539000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-539000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-539000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.1s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-733000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-733000 --output=json --user=testUser: (3.097093125s)
--- PASS: TestJSONOutput/stop/Command (3.10s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-939000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-939000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.787375ms)

-- stdout --
	{"specversion":"1.0","id":"424df317-677f-4ed8-af6c-a753f7945d6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-939000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7938a4a-b397-49a7-a628-661ed757f90b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"1c9f7017-7a8d-41af-a0b4-f48db32f0460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig"}}
	{"specversion":"1.0","id":"0e5a76a9-e870-46bd-8769-d28ce851107d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2109211c-595c-493a-ac3c-26fe7787435b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b653f297-f405-41ac-89f9-c6b328f879d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube"}}
	{"specversion":"1.0","id":"dab2394a-ba49-4a13-8c9c-a8b762228367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3bc86d06-9858-4e92-8f71-d25115702d69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-939000
--- PASS: TestErrorJSONOutput (0.20s)
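Note: with --output=json every line minikube emits is a self-contained CloudEvents-style JSON object, so the stream can be filtered mechanically. A sketch, assuming jq is available and using an illustrative profile name:

	$ out/minikube-darwin-arm64 start -p json-demo --memory=2200 --output=json --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	The driver 'fail' is not supported on darwin/arm64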

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.36s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-231000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.846583ms)

-- stdout --
	* [NoKubernetes-857000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19690
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19690-18362/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19690-18362/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
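Note: exit status 14 (MK_USAGE) is the point of this test: --no-kubernetes and --kubernetes-version are mutually exclusive. Per the error text above, the working sequence is to clear any global version default and start without the version flag:

	$ out/minikube-darwin-arm64 config unset kubernetes-version
	$ out/minikube-darwin-arm64 start -p NoKubernetes-857000 --no-kubernetes --driver=qemu2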

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-857000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-857000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.607958ms)

-- stdout --
	* The control-plane node NoKubernetes-857000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-857000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

TestNoKubernetes/serial/Stop (3.25s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-857000
I0923 04:39:20.824760   18914 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40 0x1085e6d40] Decompressors:map[bz2:0x140006ed660 gz:0x140006ed668 tar:0x140006ed610 tar.bz2:0x140006ed620 tar.gz:0x140006ed630 tar.xz:0x140006ed640 tar.zst:0x140006ed650 tbz2:0x140006ed620 tgz:0x140006ed630 txz:0x140006ed640 tzst:0x140006ed650 xz:0x140006ed670 zip:0x140006ed680 zst:0x140006ed678] Getters:map[file:0x14000a07650 http:0x1400004c5a0 https:0x1400004c730] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 04:39:20.824891   18914 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/001/docker-machine-driver-hyperkit
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-857000: (3.249876958s)
--- PASS: TestNoKubernetes/serial/Stop (3.25s)
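Note: the driver.go lines above record a known fallback path rather than a test failure: the arm64-specific hyperkit driver has no published checksum file (the 404), so the installer retries with the common, unsuffixed binary. A minimal Go sketch of that try-arch-specific-then-common pattern (the download helper is illustrative; the real code goes through go-getter with checksum verification):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// download fetches url into dst and fails on any non-200 response.
// The real installer also verifies a .sha256 checksum; omitted here.
func download(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	// Try the arch-specific binary first; on failure fall back to the
	// common version, as in the "trying to get the common version" line.
	if err := download(base+"-arm64", "docker-machine-driver-hyperkit"); err != nil {
		fmt.Println("arch-specific download failed:", err)
		if err := download(base, "docker-machine-driver-hyperkit"); err != nil {
			fmt.Println("common download failed:", err)
		}
	}
}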

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-857000 "sudo systemctl is-active --quiet service kubelet"
I0923 04:39:29.232892   18914 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit]
I0923 04:39:29.244573   18914 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit]
I0923 04:39:29.255550   18914 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2933313152/002/docker-machine-driver-hyperkit]
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-857000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (52.615583ms)

-- stdout --
	* The control-plane node NoKubernetes-857000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-857000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (2.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-579000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-579000 --alsologtostderr -v=3: (2.014787625s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-579000 -n old-k8s-version-579000: exit status 7 (57.55425ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-579000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
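Note: the EnableAddonAfterStop steps rely on minikube status reporting host state through its exit code; against a stopped profile it prints "Stopped" and exits 7, which the test then tolerates, as the "(may be ok)" line shows. A rough Go sketch of that check (paths and profile name taken from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Query only the host field; against a stopped profile minikube
	// prints "Stopped" and exits with status 7.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-579000")
	out, err := cmd.CombinedOutput()
	fmt.Printf("status output: %s\n", out)
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		fmt.Println("status error: exit status 7 (may be ok)")
	} else if err != nil {
		fmt.Println("unexpected error:", err)
	}
}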

TestStartStop/group/no-preload/serial/Stop (3.61s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-836000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-836000 --alsologtostderr -v=3: (3.613164459s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.61s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-836000 -n no-preload-836000: exit status 7 (57.143958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-836000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.82s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-946000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-946000 --alsologtostderr -v=3: (2.820620375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.82s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-946000 -n embed-certs-946000: exit status 7 (36.783708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-946000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-953000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-953000 --alsologtostderr -v=3: (3.837012292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.84s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-385000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.03s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-385000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-385000 --alsologtostderr -v=3: (2.026432917s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-953000 -n default-k8s-diff-port-953000: exit status 7 (58.530125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-953000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-385000 -n newest-cni-385000: exit status 7 (51.365959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-385000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (9.31s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2755319766/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727090255796309000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2755319766/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727090255796309000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2755319766/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727090255796309000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2755319766/001/test-1727090255796309000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.211708ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:35.855052   18914 retry.go:31] will retry after 444.518671ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.909625ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:36.390871   18914 retry.go:31] will retry after 798.262257ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.851917ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:37.277421   18914 retry.go:31] will retry after 1.572746037s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.423834ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:38.940077   18914 retry.go:31] will retry after 1.059494611s: exit status 83
I0923 04:17:39.691033   18914 retry.go:31] will retry after 3.433793606s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.60775ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:40.091585   18914 retry.go:31] will retry after 2.53700898s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.367375ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:42.718343   18914 retry.go:31] will retry after 2.131448032s: exit status 83
I0923 04:17:43.127176   18914 retry.go:31] will retry after 5.280473519s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.04725ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo umount -f /mount-9p": exit status 83 (48.669459ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2755319766/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (9.31s)
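Note: the retry.go lines throughout this entry show the test helper polling for the 9p mount with a growing, jittered delay before giving up and skipping. A simplified Go sketch of that retry-with-backoff loop (the timings and the failing probe are illustrative, not the actual helper):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter keeps calling fn with an increasing, jittered delay, in
// the spirit of the "will retry after ..." retry.go lines above.
func retryAfter(fn func() error, attempts int) error {
	delay := 400 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	i := 0
	if err := retryAfter(func() error {
		i++
		if i < 4 {
			return fmt.Errorf("exit status 83")
		}
		return nil
	}, 6); err != nil {
		fmt.Println("gave up:", err)
	}
}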

TestFunctional/parallel/MountCmd/specific-port (11.49s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3287193380/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.222333ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:45.168025   18914 retry.go:31] will retry after 401.376601ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.653166ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:45.658453   18914 retry.go:31] will retry after 695.325144ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.646958ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:46.441840   18914 retry.go:31] will retry after 1.6214906s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.118291ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:48.152909   18914 retry.go:31] will retry after 1.056580131s: exit status 83
I0923 04:17:48.410073   18914 retry.go:31] will retry after 21.876630233s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.263291ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:49.296107   18914 retry.go:31] will retry after 2.4413988s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.966708ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:51.823330   18914 retry.go:31] will retry after 4.521553114s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (79.952041ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "sudo umount -f /mount-9p": exit status 83 (47.153625ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-539000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3287193380/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.54s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1552010554/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1552010554/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1552010554/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (81.0085ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:56.676107   18914 retry.go:31] will retry after 442.110654ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (84.609583ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:57.205220   18914 retry.go:31] will retry after 819.675118ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (87.4275ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:58.114731   18914 retry.go:31] will retry after 1.634268963s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (86.888167ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:17:59.838257   18914 retry.go:31] will retry after 979.072369ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (86.06675ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:18:00.905857   18914 retry.go:31] will retry after 1.764340362s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (89.814416ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
I0923 04:18:02.762307   18914 retry.go:31] will retry after 2.889625581s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-539000 ssh "findmnt -T" /mount1: exit status 83 (84.358458ms)

-- stdout --
	* The control-plane node functional-539000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-539000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1552010554/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1552010554/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-539000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1552010554/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.54s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.43s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-897000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-897000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-897000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-897000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-897000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-897000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-897000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-897000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-897000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: kubelet daemon config:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> k8s: kubelet logs:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-897000

>>> host: docker daemon status:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: docker daemon config:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: docker system info:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: cri-docker daemon status:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: cri-docker daemon config:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: cri-dockerd version:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: containerd daemon status:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: containerd daemon config:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: containerd config dump:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: crio daemon status:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: crio daemon config:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: /etc/crio:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

>>> host: crio config:
* Profile "cilium-897000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-897000"

----------------------- debugLogs end: cilium-897000 [took: 2.327463917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-897000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-897000
--- SKIP: TestNetworkPlugins/group/cilium (2.43s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-936000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-936000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)
