Test Report: QEMU_macOS 19758

487a5cf556320fbeb648c9691968ff5b5aeb4ad7:2024-10-25:36805

Failed tests (156 of 258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.83
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.03
27 TestAddons/Setup 10.18
28 TestCertOptions 10.17
29 TestCertExpiration 195.52
30 TestDockerFlags 10.15
31 TestForceSystemdFlag 10.15
32 TestForceSystemdEnv 10.22
38 TestErrorSpam/setup 9.9
47 TestFunctional/serial/StartWithProxy 9.93
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.75
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.19
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.14
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.32
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.05
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 117.09
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.03
142 TestMultiControlPlane/serial/DeployApp 114.9
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 50.25
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.12
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.12
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.47
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.98
165 TestJSONOutput/start/Command 9.88
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.07
197 TestMountStart/serial/StartWithMountFirst 10.2
200 TestMultiNode/serial/FreshStart2Nodes 9.92
201 TestMultiNode/serial/DeployApp2Nodes 79.54
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 45.88
209 TestMultiNode/serial/RestartKeepsNodes 9.04
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 3.01
212 TestMultiNode/serial/RestartMultiNode 5.26
213 TestMultiNode/serial/ValidateNameConflict 20.23
217 TestPreload 10.14
219 TestScheduledStopUnix 9.99
220 TestSkaffold 12.54
223 TestRunningBinaryUpgrade 588.64
225 TestKubernetesUpgrade 18.47
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.92
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.96
241 TestStoppedBinaryUpgrade/Upgrade 574.76
243 TestPause/serial/Start 10.01
253 TestNoKubernetes/serial/StartWithK8s 10.16
254 TestNoKubernetes/serial/StartWithStopK8s 5.33
255 TestNoKubernetes/serial/Start 5.31
259 TestNoKubernetes/serial/StartNoArgs 5.32
261 TestNetworkPlugins/group/auto/Start 9.88
262 TestNetworkPlugins/group/kindnet/Start 9.83
263 TestNetworkPlugins/group/calico/Start 9.82
264 TestNetworkPlugins/group/custom-flannel/Start 9.83
265 TestNetworkPlugins/group/false/Start 9.76
266 TestNetworkPlugins/group/enable-default-cni/Start 9.79
267 TestNetworkPlugins/group/flannel/Start 9.98
268 TestNetworkPlugins/group/bridge/Start 9.86
269 TestNetworkPlugins/group/kubenet/Start 9.83
271 TestStartStop/group/old-k8s-version/serial/FirstStart 9.95
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 10.02
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.25
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 10.02
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.48
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.16
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/embed-certs/serial/SecondStart 5.26
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.48
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
310 TestStartStop/group/embed-certs/serial/Pause 0.11
312 TestStartStop/group/newest-cni/serial/FirstStart 9.89
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.27
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12
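
The logs below point at two recurring root causes: the v1.20.0 download-only subtests fail on a 404 for the darwin/arm64 kubectl checksum, and nearly every other test dies within ~10 seconds because QEMU cannot connect to /var/run/socket_vmnet. To bisect a single failure locally, the integration suite can be invoked directly with go test. This is a sketch that assumes you run it from the minikube repo root with out/minikube-darwin-arm64 already built; any extra flags the CI wrapper passes are not recorded in this report:

$ # Hypothetical local repro; -run accepts any Go test name regexp.
$ go test ./test/integration -run 'TestOffline' -timeout 30m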
TestDownloadOnly/v1.20.0/json-events (14.83s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-826000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-826000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.825898458s)

-- stdout --
	{"specversion":"1.0","id":"4529d1a3-81e3-4ef9-853e-efdaa584828e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-826000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f0a8cd4-fe40-42ce-942e-cc7615a3bb92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19758"}}
	{"specversion":"1.0","id":"8429c23e-8f35-4941-bf2c-3f6931a5f6d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig"}}
	{"specversion":"1.0","id":"36890118-6ba9-4b6d-a224-2a7b421b3727","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4b5f6194-fb74-4425-92e2-3b7be4f7febe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"749b71d8-469d-43ef-807d-7672a5436e1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube"}}
	{"specversion":"1.0","id":"01afcdf3-fb6b-44c9-a442-4d1990a56d10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"4ad6ac9a-292f-4926-a534-6b08ce57e61c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2eed213-4436-4495-950c-dd2533575f1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"65a1f101-0c3a-4f1d-8bd9-bf5377900ed3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a4e1af4-a7a6-44c1-be34-98979c10ea83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-826000\" primary control-plane node in \"download-only-826000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3003ef86-83bf-428d-b25e-977d4ee1113a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba4b4d47-6026-4c1d-a6c7-5fe723d60b8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320] Decompressors:map[bz2:0x140001267b0 gz:0x140001267b8 tar:0x14000126710 tar.bz2:0x14000126720 tar.gz:0x14000126730 tar.xz:0x14000126740 tar.zst:0x14000126780 tbz2:0x14000126720 tgz:0x14000126730 txz:0x14000126740 tzst:0x14000126780 xz:0x14000126a00 zip:0x14000126a10 zst:0x14000126a08] Getters:map[file:0x140007147a0 http:0x1400047e500 https:0x1400047e550] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"0c911ce5-fd89-4ac8-ab65-aefc86d528ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1025 15:57:56.806003   10999 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:57:56.806162   10999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:57:56.806166   10999 out.go:358] Setting ErrFile to fd 2...
	I1025 15:57:56.806168   10999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:57:56.806313   10999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	W1025 15:57:56.806408   10999 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19758-10490/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19758-10490/.minikube/config/config.json: no such file or directory
	I1025 15:57:56.807771   10999 out.go:352] Setting JSON to true
	I1025 15:57:56.825797   10999 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6314,"bootTime":1729890762,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:57:56.825872   10999 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:57:56.831626   10999 out.go:97] [download-only-826000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:57:56.831745   10999 notify.go:220] Checking for updates...
	W1025 15:57:56.831814   10999 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 15:57:56.835700   10999 out.go:169] MINIKUBE_LOCATION=19758
	I1025 15:57:56.838855   10999 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:57:56.843689   10999 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:57:56.846691   10999 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:57:56.849701   10999 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	W1025 15:57:56.855692   10999 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 15:57:56.855941   10999 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:57:56.859619   10999 out.go:97] Using the qemu2 driver based on user configuration
	I1025 15:57:56.859640   10999 start.go:297] selected driver: qemu2
	I1025 15:57:56.859662   10999 start.go:901] validating driver "qemu2" against <nil>
	I1025 15:57:56.859748   10999 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 15:57:56.862671   10999 out.go:169] Automatically selected the socket_vmnet network
	I1025 15:57:56.868193   10999 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 15:57:56.868297   10999 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 15:57:56.868354   10999 cni.go:84] Creating CNI manager for ""
	I1025 15:57:56.868397   10999 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 15:57:56.868445   10999 start.go:340] cluster config:
	{Name:download-only-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:57:56.873194   10999 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 15:57:56.876740   10999 out.go:97] Downloading VM boot image ...
	I1025 15:57:56.876752   10999 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1025 15:58:02.996092   10999 out.go:97] Starting "download-only-826000" primary control-plane node in "download-only-826000" cluster
	I1025 15:58:02.996141   10999 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 15:58:03.057085   10999 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 15:58:03.057108   10999 cache.go:56] Caching tarball of preloaded images
	I1025 15:58:03.057313   10999 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 15:58:03.061406   10999 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1025 15:58:03.061412   10999 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:03.148592   10999 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 15:58:10.288107   10999 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:10.288296   10999 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:10.982400   10999 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 15:58:10.982634   10999 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/download-only-826000/config.json ...
	I1025 15:58:10.982653   10999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/download-only-826000/config.json: {Name:mke9af12784eec6b05a832561d51659fb6697777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 15:58:10.982909   10999 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 15:58:10.983178   10999 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1025 15:58:11.549360   10999 out.go:193] 
	W1025 15:58:11.554496   10999 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320] Decompressors:map[bz2:0x140001267b0 gz:0x140001267b8 tar:0x14000126710 tar.bz2:0x14000126720 tar.gz:0x14000126730 tar.xz:0x14000126740 tar.zst:0x14000126780 tbz2:0x14000126720 tgz:0x14000126730 txz:0x14000126740 tzst:0x14000126780 xz:0x14000126a00 zip:0x14000126a10 zst:0x14000126a08] Getters:map[file:0x140007147a0 http:0x1400047e500 https:0x1400047e550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1025 15:58:11.554523   10999 out_reason.go:110] 
	W1025 15:58:11.563414   10999 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 15:58:11.566342   10999 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-826000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.83s)
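
The getter dump above bottoms out in a plain HTTP 404 on the kubectl checksum file for darwin/arm64 at v1.20.0; the most likely explanation is that darwin/arm64 client binaries were only published for later Kubernetes releases, so everything else in the error is incidental. A quick check from any machine (a sketch; the URL is copied from the error, and -L is needed because dl.k8s.io redirects to the release bucket):

$ curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | grep '^HTTP'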

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
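
This subtest is pure fallout from the previous one: it only checks that the earlier download left a kubectl binary in the cache, and since that download 404ed, the stat necessarily fails. Verifying by hand on the CI host would look like this (path copied from the assertion above):

$ stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl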

TestOffline (10.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-436000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-436000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.878472416s)

-- stdout --
	* [offline-docker-436000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-436000" primary control-plane node in "offline-docker-436000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-436000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:09:34.594499   12683 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:09:34.594712   12683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:34.594716   12683 out.go:358] Setting ErrFile to fd 2...
	I1025 16:09:34.594719   12683 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:34.594857   12683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:09:34.596143   12683 out.go:352] Setting JSON to false
	I1025 16:09:34.615422   12683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7012,"bootTime":1729890762,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:09:34.615508   12683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:09:34.621189   12683 out.go:177] * [offline-docker-436000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:09:34.629247   12683 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:09:34.629259   12683 notify.go:220] Checking for updates...
	I1025 16:09:34.636203   12683 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:09:34.639261   12683 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:09:34.642213   12683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:09:34.645202   12683 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:09:34.648204   12683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:09:34.651557   12683 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:34.651624   12683 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:09:34.656195   12683 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:09:34.663195   12683 start.go:297] selected driver: qemu2
	I1025 16:09:34.663202   12683 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:09:34.663209   12683 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:09:34.665700   12683 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:09:34.668208   12683 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:09:34.671347   12683 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:09:34.671366   12683 cni.go:84] Creating CNI manager for ""
	I1025 16:09:34.671392   12683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:09:34.671396   12683 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:09:34.671444   12683 start.go:340] cluster config:
	{Name:offline-docker-436000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:09:34.676172   12683 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:34.684213   12683 out.go:177] * Starting "offline-docker-436000" primary control-plane node in "offline-docker-436000" cluster
	I1025 16:09:34.688035   12683 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:09:34.688068   12683 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:09:34.688084   12683 cache.go:56] Caching tarball of preloaded images
	I1025 16:09:34.688169   12683 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:09:34.688175   12683 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:09:34.688242   12683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/offline-docker-436000/config.json ...
	I1025 16:09:34.688252   12683 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/offline-docker-436000/config.json: {Name:mk9597fa209ae75e02492bf72618d5065ea0f11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:09:34.688537   12683 start.go:360] acquireMachinesLock for offline-docker-436000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:34.688581   12683 start.go:364] duration metric: took 36.792µs to acquireMachinesLock for "offline-docker-436000"
	I1025 16:09:34.688598   12683 start.go:93] Provisioning new machine with config: &{Name:offline-docker-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:34.688630   12683 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:34.692228   12683 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:34.707750   12683 start.go:159] libmachine.API.Create for "offline-docker-436000" (driver="qemu2")
	I1025 16:09:34.707782   12683 client.go:168] LocalClient.Create starting
	I1025 16:09:34.707865   12683 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:34.707905   12683 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:34.707918   12683 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:34.707980   12683 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:34.708012   12683 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:34.708028   12683 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:34.708420   12683 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:34.866025   12683 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:34.971129   12683 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:34.971139   12683 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:34.971358   12683 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2
	I1025 16:09:34.982176   12683 main.go:141] libmachine: STDOUT: 
	I1025 16:09:34.982208   12683 main.go:141] libmachine: STDERR: 
	I1025 16:09:34.982284   12683 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2 +20000M
	I1025 16:09:34.996225   12683 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:34.996244   12683 main.go:141] libmachine: STDERR: 
	I1025 16:09:34.996269   12683 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2
	I1025 16:09:34.996274   12683 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:34.996284   12683 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:34.996325   12683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:79:3f:76:f7:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2
	I1025 16:09:34.998265   12683 main.go:141] libmachine: STDOUT: 
	I1025 16:09:34.998280   12683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:34.998301   12683 client.go:171] duration metric: took 290.513541ms to LocalClient.Create
	I1025 16:09:36.998950   12683 start.go:128] duration metric: took 2.310330208s to createHost
	I1025 16:09:36.998977   12683 start.go:83] releasing machines lock for "offline-docker-436000", held for 2.3104055s
	W1025 16:09:36.998986   12683 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:37.003580   12683 out.go:177] * Deleting "offline-docker-436000" in qemu2 ...
	W1025 16:09:37.013357   12683 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:37.013368   12683 start.go:729] Will try again in 5 seconds ...
	I1025 16:09:42.015507   12683 start.go:360] acquireMachinesLock for offline-docker-436000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:42.016016   12683 start.go:364] duration metric: took 426.459µs to acquireMachinesLock for "offline-docker-436000"
	I1025 16:09:42.016161   12683 start.go:93] Provisioning new machine with config: &{Name:offline-docker-436000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-436000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:42.016475   12683 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:42.027123   12683 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:42.076858   12683 start.go:159] libmachine.API.Create for "offline-docker-436000" (driver="qemu2")
	I1025 16:09:42.076902   12683 client.go:168] LocalClient.Create starting
	I1025 16:09:42.077047   12683 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:42.077129   12683 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:42.077149   12683 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:42.077222   12683 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:42.077280   12683 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:42.077293   12683 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:42.077879   12683 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:42.259302   12683 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:42.365139   12683 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:42.365146   12683 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:42.365340   12683 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2
	I1025 16:09:42.375280   12683 main.go:141] libmachine: STDOUT: 
	I1025 16:09:42.375301   12683 main.go:141] libmachine: STDERR: 
	I1025 16:09:42.375372   12683 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2 +20000M
	I1025 16:09:42.383796   12683 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:42.383811   12683 main.go:141] libmachine: STDERR: 
	I1025 16:09:42.383825   12683 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2
	I1025 16:09:42.383830   12683 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:42.383840   12683 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:42.383877   12683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:55:2c:93:11:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/offline-docker-436000/disk.qcow2
	I1025 16:09:42.385640   12683 main.go:141] libmachine: STDOUT: 
	I1025 16:09:42.385672   12683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:42.385686   12683 client.go:171] duration metric: took 308.780208ms to LocalClient.Create
	I1025 16:09:44.387859   12683 start.go:128] duration metric: took 2.371365875s to createHost
	I1025 16:09:44.387908   12683 start.go:83] releasing machines lock for "offline-docker-436000", held for 2.371883166s
	W1025 16:09:44.388260   12683 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-436000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:44.403002   12683 out.go:201] 
	W1025 16:09:44.407041   12683 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:09:44.407078   12683 out.go:270] * 
	* 
	W1025 16:09:44.409844   12683 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:09:44.422934   12683 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-436000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-25 16:09:44.438677 -0700 PDT m=+707.695287251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-436000 -n offline-docker-436000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-436000 -n offline-docker-436000: exit status 7 (74.26825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-436000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-436000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-436000
I1025 16:09:44.531147   10998 install.go:79] stdout: 
W1025 16:09:44.531259   10998 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit 

I1025 16:09:44.531284   10998 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit]
I1025 16:09:44.545828   10998 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit]
I1025 16:09:44.558019   10998 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit]
I1025 16:09:44.569707   10998 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit]
I1025 16:09:44.591516   10998 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 16:09:44.591642   10998 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
--- FAIL: TestOffline (10.03s)
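
Both VM creation attempts fail at the same point: socket_vmnet_client cannot reach the daemon's socket. That points at a host-environment problem (socket_vmnet not running, or a stale /var/run/socket_vmnet socket) rather than a regression in the code under test, and the same signature recurs in most of the remaining failures. A triage sketch, assuming the socket_vmnet install layout implied by the client path in the log; the brew invocation is one common way to run the daemon, not necessarily how this CI host manages it:

$ ls -l /var/run/socket_vmnet                                           # does the socket exist?
$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true  # fails fast if the daemon is down
$ sudo brew services start socket_vmnet                                 # (re)start the daemon under launchd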

TestAddons/Setup (10.18s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-362000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-362000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.182516625s)

-- stdout --
	* [addons-362000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-362000" primary control-plane node in "addons-362000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 15:58:20.389017   11077 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:58:20.389171   11077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:58:20.389175   11077 out.go:358] Setting ErrFile to fd 2...
	I1025 15:58:20.389177   11077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:58:20.389311   11077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:58:20.390442   11077 out.go:352] Setting JSON to false
	I1025 15:58:20.407998   11077 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6338,"bootTime":1729890762,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:58:20.408064   11077 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:58:20.412880   11077 out.go:177] * [addons-362000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:58:20.420001   11077 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 15:58:20.420063   11077 notify.go:220] Checking for updates...
	I1025 15:58:20.426970   11077 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:58:20.429995   11077 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:58:20.433023   11077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:58:20.436000   11077 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 15:58:20.439009   11077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 15:58:20.442233   11077 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:58:20.445971   11077 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 15:58:20.452986   11077 start.go:297] selected driver: qemu2
	I1025 15:58:20.452994   11077 start.go:901] validating driver "qemu2" against <nil>
	I1025 15:58:20.453002   11077 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 15:58:20.455641   11077 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 15:58:20.458944   11077 out.go:177] * Automatically selected the socket_vmnet network
	I1025 15:58:20.462112   11077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 15:58:20.462145   11077 cni.go:84] Creating CNI manager for ""
	I1025 15:58:20.462169   11077 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 15:58:20.462174   11077 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 15:58:20.462215   11077 start.go:340] cluster config:
	{Name:addons-362000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:58:20.466948   11077 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 15:58:20.472936   11077 out.go:177] * Starting "addons-362000" primary control-plane node in "addons-362000" cluster
	I1025 15:58:20.477012   11077 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 15:58:20.477035   11077 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 15:58:20.477045   11077 cache.go:56] Caching tarball of preloaded images
	I1025 15:58:20.477126   11077 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 15:58:20.477132   11077 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 15:58:20.477328   11077 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/addons-362000/config.json ...
	I1025 15:58:20.477338   11077 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/addons-362000/config.json: {Name:mkb7198fe26d886fc4804182457e879b2cdc90b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 15:58:20.477695   11077 start.go:360] acquireMachinesLock for addons-362000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 15:58:20.477781   11077 start.go:364] duration metric: took 80.792µs to acquireMachinesLock for "addons-362000"
	I1025 15:58:20.477793   11077 start.go:93] Provisioning new machine with config: &{Name:addons-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 15:58:20.477821   11077 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 15:58:20.485034   11077 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1025 15:58:20.501847   11077 start.go:159] libmachine.API.Create for "addons-362000" (driver="qemu2")
	I1025 15:58:20.501900   11077 client.go:168] LocalClient.Create starting
	I1025 15:58:20.502046   11077 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 15:58:20.548528   11077 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 15:58:20.656684   11077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 15:58:20.857526   11077 main.go:141] libmachine: Creating SSH key...
	I1025 15:58:20.914866   11077 main.go:141] libmachine: Creating Disk image...
	I1025 15:58:20.914872   11077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 15:58:20.915107   11077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2
	I1025 15:58:20.924937   11077 main.go:141] libmachine: STDOUT: 
	I1025 15:58:20.924955   11077 main.go:141] libmachine: STDERR: 
	I1025 15:58:20.925012   11077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2 +20000M
	I1025 15:58:20.933510   11077 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 15:58:20.933527   11077 main.go:141] libmachine: STDERR: 
	I1025 15:58:20.933541   11077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2
	I1025 15:58:20.933546   11077 main.go:141] libmachine: Starting QEMU VM...
	I1025 15:58:20.933584   11077 qemu.go:418] Using hvf for hardware acceleration
	I1025 15:58:20.933610   11077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:83:b8:06:c6:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2
	I1025 15:58:20.935432   11077 main.go:141] libmachine: STDOUT: 
	I1025 15:58:20.935454   11077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 15:58:20.935487   11077 client.go:171] duration metric: took 433.5735ms to LocalClient.Create
	I1025 15:58:22.937641   11077 start.go:128] duration metric: took 2.459832s to createHost
	I1025 15:58:22.937741   11077 start.go:83] releasing machines lock for "addons-362000", held for 2.459952s
	W1025 15:58:22.937802   11077 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:58:22.948121   11077 out.go:177] * Deleting "addons-362000" in qemu2 ...
	W1025 15:58:22.975670   11077 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:58:22.975700   11077 start.go:729] Will try again in 5 seconds ...
	I1025 15:58:27.976790   11077 start.go:360] acquireMachinesLock for addons-362000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 15:58:27.977365   11077 start.go:364] duration metric: took 480.042µs to acquireMachinesLock for "addons-362000"
	I1025 15:58:27.977493   11077 start.go:93] Provisioning new machine with config: &{Name:addons-362000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-362000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 15:58:27.977856   11077 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 15:58:27.994712   11077 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1025 15:58:28.046054   11077 start.go:159] libmachine.API.Create for "addons-362000" (driver="qemu2")
	I1025 15:58:28.046117   11077 client.go:168] LocalClient.Create starting
	I1025 15:58:28.046239   11077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 15:58:28.046324   11077 main.go:141] libmachine: Decoding PEM data...
	I1025 15:58:28.046340   11077 main.go:141] libmachine: Parsing certificate...
	I1025 15:58:28.046411   11077 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 15:58:28.046473   11077 main.go:141] libmachine: Decoding PEM data...
	I1025 15:58:28.046484   11077 main.go:141] libmachine: Parsing certificate...
	I1025 15:58:28.047176   11077 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 15:58:28.217605   11077 main.go:141] libmachine: Creating SSH key...
	I1025 15:58:28.473187   11077 main.go:141] libmachine: Creating Disk image...
	I1025 15:58:28.473197   11077 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 15:58:28.473379   11077 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2
	I1025 15:58:28.483654   11077 main.go:141] libmachine: STDOUT: 
	I1025 15:58:28.483676   11077 main.go:141] libmachine: STDERR: 
	I1025 15:58:28.483737   11077 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2 +20000M
	I1025 15:58:28.492238   11077 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 15:58:28.492256   11077 main.go:141] libmachine: STDERR: 
	I1025 15:58:28.492276   11077 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2
	I1025 15:58:28.492280   11077 main.go:141] libmachine: Starting QEMU VM...
	I1025 15:58:28.492289   11077 qemu.go:418] Using hvf for hardware acceleration
	I1025 15:58:28.492327   11077 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:c2:68:fc:ed:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/addons-362000/disk.qcow2
	I1025 15:58:28.494126   11077 main.go:141] libmachine: STDOUT: 
	I1025 15:58:28.494141   11077 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 15:58:28.494155   11077 client.go:171] duration metric: took 448.0395ms to LocalClient.Create
	I1025 15:58:30.496463   11077 start.go:128] duration metric: took 2.518553875s to createHost
	I1025 15:58:30.496565   11077 start.go:83] releasing machines lock for "addons-362000", held for 2.519208958s
	W1025 15:58:30.497011   11077 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:58:30.505400   11077 out.go:201] 
	W1025 15:58:30.514564   11077 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 15:58:30.514590   11077 out.go:270] * 
	* 
	W1025 15:58:30.517477   11077 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 15:58:30.524434   11077 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-362000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.18s)
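
Note: the addon flags above are never exercised; both VM creation attempts die at the socket_vmnet connect. If the daemon cannot be restored, one way to re-run the addon path is the qemu2 driver's builtin user network, sketched here on the assumption that this minikube build accepts --network=user (host-to-VM services, tunnels, and ingress reachability are limited on the user network):

$ out/minikube-darwin-arm64 start -p addons-362000 --wait=true --memory=4000 --driver=qemu2 --network=user [remaining --addons flags as above]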

TestCertOptions (10.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-507000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-507000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.886923125s)

-- stdout --
	* [cert-options-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-507000" primary control-plane node in "cert-options-507000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-507000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-507000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-507000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-507000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (87.073292ms)

-- stdout --
	* The control-plane node cert-options-507000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-507000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-507000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-507000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-507000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-507000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.626167ms)

-- stdout --
	* The control-plane node cert-options-507000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-507000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-507000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-507000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-507000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-25 16:10:15.011061 -0700 PDT m=+738.267883834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-507000 -n cert-options-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-507000 -n cert-options-507000: exit status 7 (34.347583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-507000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-507000
--- FAIL: TestCertOptions (10.17s)
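
Note: the SAN failures at cert_options_test.go:69 are downstream of the stopped host: the ssh step returns the "host is not running" hint instead of a certificate, so no SAN can match. On a healthy cluster the same check reduces to reading the apiserver certificate's SAN extension, e.g. with the command the test already wraps:

$ out/minikube-darwin-arm64 -p cert-options-507000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"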

TestCertExpiration (195.52s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.119051792s)

-- stdout --
	* [cert-expiration-057000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-057000" primary control-plane node in "cert-expiration-057000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-057000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.244577042s)

-- stdout --
	* [cert-expiration-057000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-057000" primary control-plane node in "cert-expiration-057000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-057000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-057000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-057000" primary control-plane node in "cert-expiration-057000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-057000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-25 16:13:15.265433 -0700 PDT m=+918.523505626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-057000 -n cert-expiration-057000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-057000 -n cert-expiration-057000: exit status 7 (70.361334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-057000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-057000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-057000
--- FAIL: TestCertExpiration (195.52s)
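
Note: the 195.52s runtime is mostly the test's own wait, not the failures: the two starts above take roughly 10s and 5s, and the test waits out the 3-minute --cert-expiration=3m window in between so that the second start (--cert-expiration=8760h) can observe and warn about expired certificates. The sequence exercised, as run above:

$ out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=3m --driver=qemu2
  (wait >3m for the short-lived certificates to expire)
$ out/minikube-darwin-arm64 start -p cert-expiration-057000 --memory=2048 --cert-expiration=8760h --driver=qemu2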

TestDockerFlags (10.15s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-171000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-171000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.886842709s)

-- stdout --
	* [docker-flags-171000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-171000" primary control-plane node in "docker-flags-171000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-171000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:09:54.843608   12875 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:09:54.843771   12875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:54.843775   12875 out.go:358] Setting ErrFile to fd 2...
	I1025 16:09:54.843777   12875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:54.843907   12875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:09:54.845098   12875 out.go:352] Setting JSON to false
	I1025 16:09:54.862804   12875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7032,"bootTime":1729890762,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:09:54.862874   12875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:09:54.868092   12875 out.go:177] * [docker-flags-171000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:09:54.875081   12875 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:09:54.875124   12875 notify.go:220] Checking for updates...
	I1025 16:09:54.880965   12875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:09:54.884026   12875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:09:54.887041   12875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:09:54.889974   12875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:09:54.893022   12875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:09:54.896426   12875 config.go:182] Loaded profile config "force-systemd-flag-958000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:54.896501   12875 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:54.896569   12875 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:09:54.900964   12875 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:09:54.908037   12875 start.go:297] selected driver: qemu2
	I1025 16:09:54.908044   12875 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:09:54.908051   12875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:09:54.910574   12875 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:09:54.914997   12875 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:09:54.918071   12875 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1025 16:09:54.918089   12875 cni.go:84] Creating CNI manager for ""
	I1025 16:09:54.918114   12875 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:09:54.918118   12875 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:09:54.918153   12875 start.go:340] cluster config:
	{Name:docker-flags-171000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:09:54.922853   12875 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:54.929976   12875 out.go:177] * Starting "docker-flags-171000" primary control-plane node in "docker-flags-171000" cluster
	I1025 16:09:54.934026   12875 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:09:54.934042   12875 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:09:54.934053   12875 cache.go:56] Caching tarball of preloaded images
	I1025 16:09:54.934128   12875 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:09:54.934134   12875 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:09:54.934196   12875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/docker-flags-171000/config.json ...
	I1025 16:09:54.934208   12875 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/docker-flags-171000/config.json: {Name:mkb4c13b0ca60e72600fd5a3bb27e639693b6bf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:09:54.934594   12875 start.go:360] acquireMachinesLock for docker-flags-171000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:54.934647   12875 start.go:364] duration metric: took 45.25µs to acquireMachinesLock for "docker-flags-171000"
	I1025 16:09:54.934659   12875 start.go:93] Provisioning new machine with config: &{Name:docker-flags-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:54.934704   12875 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:54.943051   12875 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:54.960873   12875 start.go:159] libmachine.API.Create for "docker-flags-171000" (driver="qemu2")
	I1025 16:09:54.960895   12875 client.go:168] LocalClient.Create starting
	I1025 16:09:54.960976   12875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:54.961018   12875 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:54.961031   12875 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:54.961070   12875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:54.961101   12875 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:54.961109   12875 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:54.961484   12875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:55.118306   12875 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:55.216969   12875 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:55.216975   12875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:55.217184   12875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2
	I1025 16:09:55.227260   12875 main.go:141] libmachine: STDOUT: 
	I1025 16:09:55.227283   12875 main.go:141] libmachine: STDERR: 
	I1025 16:09:55.227365   12875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2 +20000M
	I1025 16:09:55.236054   12875 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:55.236070   12875 main.go:141] libmachine: STDERR: 
	I1025 16:09:55.236088   12875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2
	I1025 16:09:55.236093   12875 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:55.236106   12875 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:55.236131   12875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8b:94:62:a9:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2
	I1025 16:09:55.238017   12875 main.go:141] libmachine: STDOUT: 
	I1025 16:09:55.238031   12875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:55.238048   12875 client.go:171] duration metric: took 277.151042ms to LocalClient.Create
	I1025 16:09:57.240207   12875 start.go:128] duration metric: took 2.305495834s to createHost
	I1025 16:09:57.240268   12875 start.go:83] releasing machines lock for "docker-flags-171000", held for 2.305625375s
	W1025 16:09:57.240343   12875 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:57.253402   12875 out.go:177] * Deleting "docker-flags-171000" in qemu2 ...
	W1025 16:09:57.288846   12875 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:57.288884   12875 start.go:729] Will try again in 5 seconds ...
	I1025 16:10:02.291072   12875 start.go:360] acquireMachinesLock for docker-flags-171000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:10:02.291440   12875 start.go:364] duration metric: took 284.875µs to acquireMachinesLock for "docker-flags-171000"
	I1025 16:10:02.291503   12875 start.go:93] Provisioning new machine with config: &{Name:docker-flags-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:10:02.291776   12875 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:10:02.313570   12875 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:10:02.362845   12875 start.go:159] libmachine.API.Create for "docker-flags-171000" (driver="qemu2")
	I1025 16:10:02.362916   12875 client.go:168] LocalClient.Create starting
	I1025 16:10:02.363065   12875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:10:02.363143   12875 main.go:141] libmachine: Decoding PEM data...
	I1025 16:10:02.363160   12875 main.go:141] libmachine: Parsing certificate...
	I1025 16:10:02.363222   12875 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:10:02.363278   12875 main.go:141] libmachine: Decoding PEM data...
	I1025 16:10:02.363289   12875 main.go:141] libmachine: Parsing certificate...
	I1025 16:10:02.364021   12875 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:10:02.532520   12875 main.go:141] libmachine: Creating SSH key...
	I1025 16:10:02.631726   12875 main.go:141] libmachine: Creating Disk image...
	I1025 16:10:02.631734   12875 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:10:02.631936   12875 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2
	I1025 16:10:02.641722   12875 main.go:141] libmachine: STDOUT: 
	I1025 16:10:02.641743   12875 main.go:141] libmachine: STDERR: 
	I1025 16:10:02.641795   12875 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2 +20000M
	I1025 16:10:02.650203   12875 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:10:02.650220   12875 main.go:141] libmachine: STDERR: 
	I1025 16:10:02.650232   12875 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2
	I1025 16:10:02.650237   12875 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:10:02.650248   12875 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:10:02.650291   12875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:dc:15:e3:8e:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/docker-flags-171000/disk.qcow2
	I1025 16:10:02.652059   12875 main.go:141] libmachine: STDOUT: 
	I1025 16:10:02.652074   12875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:10:02.652092   12875 client.go:171] duration metric: took 289.172708ms to LocalClient.Create
	I1025 16:10:04.654239   12875 start.go:128] duration metric: took 2.362428875s to createHost
	I1025 16:10:04.654289   12875 start.go:83] releasing machines lock for "docker-flags-171000", held for 2.362845209s
	W1025 16:10:04.654567   12875 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:10:04.666551   12875 out.go:201] 
	W1025 16:10:04.674863   12875 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:10:04.674892   12875 out.go:270] * 
	* 
	W1025 16:10:04.676253   12875 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:10:04.688537   12875 out.go:201] 

** /stderr **
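
Every VM launch in the log above is wrapped by /opt/socket_vmnet/bin/socket_vmnet_client, and both attempts fail at the same point: nothing is listening on /var/run/socket_vmnet, so the client gets "Connection refused" before QEMU ever starts. A minimal Go sketch (illustrative only, not part of the test suite) that reproduces the failing probe:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the same unix socket socket_vmnet_client connects to; on this
    	// agent it fails because the socket_vmnet daemon is not running.
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }
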
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-171000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-171000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-171000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (86.5865ms)

-- stdout --
	* The control-plane node docker-flags-171000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-171000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-171000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-171000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-171000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-171000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-171000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-171000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-171000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (50.62175ms)

-- stdout --
	* The control-plane node docker-flags-171000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-171000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-171000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-171000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-171000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-171000\"\n"
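
The three assertions above never see real systemctl output; they run substring checks against the "host is not running" advice text, so they fail as a direct consequence of the provisioning error. A hedged sketch of the kind of check being made (illustrative names, not the actual docker_test.go code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// In a passing run this would hold `systemctl show docker` output;
    	// here it holds the advice text instead.
    	out := "* The control-plane node docker-flags-171000 host is not running: state=Stopped"
    	for _, want := range []string{"FOO=BAR", "BAZ=BAT", "--debug"} {
    		if !strings.Contains(out, want) {
    			fmt.Printf("expected %q in docker unit output\n", want)
    		}
    	}
    }
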
panic.go:629: *** TestDockerFlags FAILED at 2024-10-25 16:10:04.837541 -0700 PDT m=+728.094292751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-171000 -n docker-flags-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-171000 -n docker-flags-171000: exit status 7 (33.068875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-171000
--- FAIL: TestDockerFlags (10.15s)
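
The control flow visible in the stderr log — create the host, fail, delete the profile, wait five seconds, retry once, then exit with GUEST_PROVISION — can be summarized as the sketch below (an assumption about the shape of the logic, not minikube's actual start.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // createHost stands in for the libmachine create path; on this agent it
    // always fails the way the log shows.
    func createHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := createHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second)
    		if err := createHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }
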

TestForceSystemdFlag (10.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.940117959s)

-- stdout --
	* [force-systemd-flag-958000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-958000" primary control-plane node in "force-systemd-flag-958000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-958000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:09:49.781735   12854 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:09:49.781913   12854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:49.781917   12854 out.go:358] Setting ErrFile to fd 2...
	I1025 16:09:49.781919   12854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:49.782045   12854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:09:49.783202   12854 out.go:352] Setting JSON to false
	I1025 16:09:49.800871   12854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7027,"bootTime":1729890762,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:09:49.800937   12854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:09:49.807194   12854 out.go:177] * [force-systemd-flag-958000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:09:49.823134   12854 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:09:49.823167   12854 notify.go:220] Checking for updates...
	I1025 16:09:49.833054   12854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:09:49.837104   12854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:09:49.840136   12854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:09:49.843028   12854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:09:49.846061   12854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:09:49.849497   12854 config.go:182] Loaded profile config "force-systemd-env-462000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:49.849578   12854 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:49.849624   12854 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:09:49.853061   12854 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:09:49.860144   12854 start.go:297] selected driver: qemu2
	I1025 16:09:49.860153   12854 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:09:49.860159   12854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:09:49.862832   12854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:09:49.866013   12854 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:09:49.869161   12854 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 16:09:49.869179   12854 cni.go:84] Creating CNI manager for ""
	I1025 16:09:49.869201   12854 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:09:49.869208   12854 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:09:49.869235   12854 start.go:340] cluster config:
	{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:09:49.874210   12854 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:49.883109   12854 out.go:177] * Starting "force-systemd-flag-958000" primary control-plane node in "force-systemd-flag-958000" cluster
	I1025 16:09:49.887090   12854 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:09:49.887113   12854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:09:49.887124   12854 cache.go:56] Caching tarball of preloaded images
	I1025 16:09:49.887224   12854 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:09:49.887230   12854 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:09:49.887284   12854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/force-systemd-flag-958000/config.json ...
	I1025 16:09:49.887295   12854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/force-systemd-flag-958000/config.json: {Name:mk967160db6b7dc0b52a073660c858f6104307ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:09:49.887774   12854 start.go:360] acquireMachinesLock for force-systemd-flag-958000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:49.887828   12854 start.go:364] duration metric: took 45.583µs to acquireMachinesLock for "force-systemd-flag-958000"
	I1025 16:09:49.887841   12854 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:49.887873   12854 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:49.892090   12854 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:49.909668   12854 start.go:159] libmachine.API.Create for "force-systemd-flag-958000" (driver="qemu2")
	I1025 16:09:49.909691   12854 client.go:168] LocalClient.Create starting
	I1025 16:09:49.909767   12854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:49.909807   12854 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:49.909817   12854 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:49.909853   12854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:49.909883   12854 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:49.909893   12854 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:49.910345   12854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:50.066576   12854 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:50.178076   12854 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:50.178084   12854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:50.178264   12854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I1025 16:09:50.188134   12854 main.go:141] libmachine: STDOUT: 
	I1025 16:09:50.188153   12854 main.go:141] libmachine: STDERR: 
	I1025 16:09:50.188207   12854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2 +20000M
	I1025 16:09:50.196656   12854 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:50.196689   12854 main.go:141] libmachine: STDERR: 
	I1025 16:09:50.196704   12854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I1025 16:09:50.196709   12854 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:50.196720   12854 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:50.196745   12854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a4:b2:fa:85:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I1025 16:09:50.198561   12854 main.go:141] libmachine: STDOUT: 
	I1025 16:09:50.198574   12854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:50.198593   12854 client.go:171] duration metric: took 288.896417ms to LocalClient.Create
	I1025 16:09:52.200743   12854 start.go:128] duration metric: took 2.312871083s to createHost
	I1025 16:09:52.200794   12854 start.go:83] releasing machines lock for "force-systemd-flag-958000", held for 2.312972916s
	W1025 16:09:52.200854   12854 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:52.226019   12854 out.go:177] * Deleting "force-systemd-flag-958000" in qemu2 ...
	W1025 16:09:52.247945   12854 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:52.247962   12854 start.go:729] Will try again in 5 seconds ...
	I1025 16:09:57.250147   12854 start.go:360] acquireMachinesLock for force-systemd-flag-958000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:57.250607   12854 start.go:364] duration metric: took 365.5µs to acquireMachinesLock for "force-systemd-flag-958000"
	I1025 16:09:57.250676   12854 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-958000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-958000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:57.251068   12854 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:57.262606   12854 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:57.314505   12854 start.go:159] libmachine.API.Create for "force-systemd-flag-958000" (driver="qemu2")
	I1025 16:09:57.314554   12854 client.go:168] LocalClient.Create starting
	I1025 16:09:57.314706   12854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:57.314791   12854 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:57.314809   12854 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:57.314870   12854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:57.314933   12854 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:57.314947   12854 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:57.315644   12854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:57.484677   12854 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:57.613680   12854 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:57.613690   12854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:57.613933   12854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I1025 16:09:57.624363   12854 main.go:141] libmachine: STDOUT: 
	I1025 16:09:57.624382   12854 main.go:141] libmachine: STDERR: 
	I1025 16:09:57.624450   12854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2 +20000M
	I1025 16:09:57.632898   12854 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:57.632916   12854 main.go:141] libmachine: STDERR: 
	I1025 16:09:57.632927   12854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I1025 16:09:57.632932   12854 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:57.632939   12854 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:57.632965   12854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d6:29:0a:9d:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-flag-958000/disk.qcow2
	I1025 16:09:57.634819   12854 main.go:141] libmachine: STDOUT: 
	I1025 16:09:57.634832   12854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:57.634850   12854 client.go:171] duration metric: took 320.292708ms to LocalClient.Create
	I1025 16:09:59.637013   12854 start.go:128] duration metric: took 2.385930791s to createHost
	I1025 16:09:59.637067   12854 start.go:83] releasing machines lock for "force-systemd-flag-958000", held for 2.386454791s
	W1025 16:09:59.637392   12854 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-958000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:59.651026   12854 out.go:201] 
	W1025 16:09:59.661340   12854 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:09:59.661383   12854 out.go:270] * 
	* 
	W1025 16:09:59.663980   12854 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:09:59.675992   12854 out.go:201] 

** /stderr **
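
The QEMU invocation in this log (as in the previous test) uses "-netdev socket,id=net0,fd=3": socket_vmnet_client is expected to open the vmnet socket and hand it to QEMU as file descriptor 3. A rough Go illustration of that handoff (an assumption about the mechanism, not socket_vmnet_client's source):

    package main

    import (
    	"net"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The step failing on this agent: dialing the vmnet socket.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		panic(err)
    	}
    	f, _ := conn.(*net.UnixConn).File()
    	// ExtraFiles[0] becomes fd 3 in the child, matching "fd=3" above.
    	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
    	cmd.ExtraFiles = []*os.File{f}
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	_ = cmd.Run()
    }
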
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.143542ms)

-- stdout --
	* The control-plane node force-systemd-flag-958000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-958000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-958000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
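
Had the VM come up, docker_test.go:110 would have compared the reported cgroup driver against "systemd". A hedged sketch of that check (assuming a reachable docker daemon; the expected value is implied by --force-systemd, this is not the test's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same query the test runs over SSH.
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		fmt.Println("docker not reachable:", err)
    		return
    	}
    	if got := strings.TrimSpace(string(out)); got != "systemd" {
    		fmt.Printf("expected cgroup driver \"systemd\", got %q\n", got)
    	}
    }
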
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-25 16:09:59.782158 -0700 PDT m=+723.038875084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-958000 -n force-systemd-flag-958000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-958000 -n force-systemd-flag-958000: exit status 7 (36.641708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-958000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-958000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-958000
--- FAIL: TestForceSystemdFlag (10.15s)

TestForceSystemdEnv (10.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-462000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1025 16:09:46.360770   10998 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1025 16:09:46.360831   10998 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1025 16:09:46.360904   10998 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1025 16:09:46.360940   10998 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit
I1025 16:09:46.772236   10998 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0] Decompressors:map[bz2:0x1400000f840 gz:0x1400000f848 tar:0x1400000f7f0 tar.bz2:0x1400000f800 tar.gz:0x1400000f810 tar.xz:0x1400000f820 tar.zst:0x1400000f830 tbz2:0x1400000f800 tgz:0x1400000f810 txz:0x1400000f820 tzst:0x1400000f830 xz:0x1400000f850 zip:0x1400000f860 zst:0x1400000f858] Getters:map[file:0x140008dd100 http:0x14000721d60 https:0x14000721db0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1025 16:09:46.772362   10998 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit
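
The interleaved TestHyperKitDriverInstallOrUpdate lines above show expected behavior, not a failure: the arm64-suffixed release asset has no published checksum (the .sha256 URL returns 404), so the downloader falls back to the unsuffixed "common" asset. A sketch of that fallback (fetch is a hypothetical stand-in for the checksum-verified download, not minikube's download.go):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // fetch is a hypothetical stand-in for the real checksum-verified download.
    func fetch(url string) error {
    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("bad response code: %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0"
    	if err := fetch(base + "/docker-machine-driver-hyperkit-arm64.sha256"); err != nil {
    		fmt.Println("arch-specific checksum unavailable:", err)
    		_ = fetch(base + "/docker-machine-driver-hyperkit.sha256") // common version
    	}
    }
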
I1025 16:09:49.697229   10998 install.go:79] stdout: 
W1025 16:09:49.697465   10998 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit 

I1025 16:09:49.697493   10998 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit]
I1025 16:09:49.714504   10998 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit]
I1025 16:09:49.727654   10998 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit]
I1025 16:09:49.738391   10998 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/002/docker-machine-driver-hyperkit]
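
The chown/chmod pair above gives the hyperkit driver the setuid-root bit it needs. Note the two-phase pattern at install.go:99/106: each command is first probed with sudo -n (non-interactive) so an agent without a matching sudoers entry fails fast instead of hanging on a password prompt. A hedged sketch of that pattern (sudoRun and the target path are illustrative, not the actual helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // sudoRun probes with `sudo -n` first, then runs the command for real.
    func sudoRun(args ...string) error {
    	probe := exec.Command("sudo", append([]string{"-n"}, args...)...)
    	if err := probe.Run(); err != nil {
    		return fmt.Errorf("sudo needs interaction for %v: %w", args, err)
    	}
    	return exec.Command("sudo", args...).Run()
    }

    func main() {
    	// Placeholder path; the real target is the downloaded driver binary.
    	if err := sudoRun("chmod", "u+s", "/tmp/docker-machine-driver-hyperkit"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
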
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-462000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.004717959s)

-- stdout --
	* [force-systemd-env-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-462000" primary control-plane node in "force-systemd-env-462000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-462000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:09:44.624798   12834 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:09:44.624976   12834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:44.624981   12834 out.go:358] Setting ErrFile to fd 2...
	I1025 16:09:44.624983   12834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:44.625129   12834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:09:44.626483   12834 out.go:352] Setting JSON to false
	I1025 16:09:44.646518   12834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7022,"bootTime":1729890762,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:09:44.646608   12834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:09:44.651759   12834 out.go:177] * [force-systemd-env-462000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:09:44.659951   12834 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:09:44.659963   12834 notify.go:220] Checking for updates...
	I1025 16:09:44.666835   12834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:09:44.669952   12834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:09:44.672830   12834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:09:44.675835   12834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:09:44.678892   12834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1025 16:09:44.680551   12834 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:44.680600   12834 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:09:44.684873   12834 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:09:44.691730   12834 start.go:297] selected driver: qemu2
	I1025 16:09:44.691736   12834 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:09:44.691742   12834 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:09:44.694220   12834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:09:44.696827   12834 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:09:44.699918   12834 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 16:09:44.699931   12834 cni.go:84] Creating CNI manager for ""
	I1025 16:09:44.699952   12834 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:09:44.699956   12834 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:09:44.699978   12834 start.go:340] cluster config:
	{Name:force-systemd-env-462000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:09:44.704197   12834 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:44.711866   12834 out.go:177] * Starting "force-systemd-env-462000" primary control-plane node in "force-systemd-env-462000" cluster
	I1025 16:09:44.715919   12834 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:09:44.715932   12834 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:09:44.715937   12834 cache.go:56] Caching tarball of preloaded images
	I1025 16:09:44.716001   12834 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:09:44.716006   12834 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:09:44.716050   12834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/force-systemd-env-462000/config.json ...
	I1025 16:09:44.716060   12834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/force-systemd-env-462000/config.json: {Name:mke97ab5ba887fcea19129f5f98483fe4f3d2884 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:09:44.716330   12834 start.go:360] acquireMachinesLock for force-systemd-env-462000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:44.716378   12834 start.go:364] duration metric: took 39.625µs to acquireMachinesLock for "force-systemd-env-462000"
	I1025 16:09:44.716390   12834 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:44.716422   12834 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:44.724881   12834 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:44.739594   12834 start.go:159] libmachine.API.Create for "force-systemd-env-462000" (driver="qemu2")
	I1025 16:09:44.739621   12834 client.go:168] LocalClient.Create starting
	I1025 16:09:44.739690   12834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:44.739728   12834 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:44.739741   12834 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:44.739781   12834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:44.739811   12834 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:44.739818   12834 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:44.740202   12834 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:44.891635   12834 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:44.958564   12834 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:44.958571   12834 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:44.958798   12834 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2
	I1025 16:09:44.968960   12834 main.go:141] libmachine: STDOUT: 
	I1025 16:09:44.968974   12834 main.go:141] libmachine: STDERR: 
	I1025 16:09:44.969028   12834 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2 +20000M
	I1025 16:09:44.978023   12834 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:44.978040   12834 main.go:141] libmachine: STDERR: 
	I1025 16:09:44.978060   12834 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2
	I1025 16:09:44.978066   12834 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:44.978078   12834 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:44.978109   12834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:5a:f2:27:20:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2
	I1025 16:09:44.980049   12834 main.go:141] libmachine: STDOUT: 
	I1025 16:09:44.980092   12834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:44.980122   12834 client.go:171] duration metric: took 240.497209ms to LocalClient.Create
	I1025 16:09:46.982343   12834 start.go:128] duration metric: took 2.265902917s to createHost
	I1025 16:09:46.982465   12834 start.go:83] releasing machines lock for "force-systemd-env-462000", held for 2.266055834s
	W1025 16:09:46.982560   12834 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:46.991870   12834 out.go:177] * Deleting "force-systemd-env-462000" in qemu2 ...
	W1025 16:09:47.018200   12834 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:47.018227   12834 start.go:729] Will try again in 5 seconds ...
	I1025 16:09:52.020426   12834 start.go:360] acquireMachinesLock for force-systemd-env-462000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:52.200913   12834 start.go:364] duration metric: took 180.363208ms to acquireMachinesLock for "force-systemd-env-462000"
	I1025 16:09:52.201068   12834 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-462000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-462000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:52.201376   12834 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:52.214991   12834 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 16:09:52.263466   12834 start.go:159] libmachine.API.Create for "force-systemd-env-462000" (driver="qemu2")
	I1025 16:09:52.263517   12834 client.go:168] LocalClient.Create starting
	I1025 16:09:52.263687   12834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:52.263765   12834 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:52.263784   12834 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:52.263843   12834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:52.263902   12834 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:52.263912   12834 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:52.264561   12834 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:52.440248   12834 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:52.521683   12834 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:52.521689   12834 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:52.521906   12834 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2
	I1025 16:09:52.532130   12834 main.go:141] libmachine: STDOUT: 
	I1025 16:09:52.532195   12834 main.go:141] libmachine: STDERR: 
	I1025 16:09:52.532259   12834 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2 +20000M
	I1025 16:09:52.540762   12834 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:52.540821   12834 main.go:141] libmachine: STDERR: 
	I1025 16:09:52.540835   12834 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2
	I1025 16:09:52.540842   12834 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:52.540853   12834 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:52.540886   12834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:05:4d:81:c6:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/force-systemd-env-462000/disk.qcow2
	I1025 16:09:52.542802   12834 main.go:141] libmachine: STDOUT: 
	I1025 16:09:52.542856   12834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:52.542869   12834 client.go:171] duration metric: took 279.348833ms to LocalClient.Create
	I1025 16:09:54.545034   12834 start.go:128] duration metric: took 2.343644125s to createHost
	I1025 16:09:54.545093   12834 start.go:83] releasing machines lock for "force-systemd-env-462000", held for 2.344168375s
	W1025 16:09:54.545460   12834 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-462000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:54.560020   12834 out.go:201] 
	W1025 16:09:54.571363   12834 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:09:54.571398   12834 out.go:270] * 
	* 
	W1025 16:09:54.573867   12834 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:09:54.580957   12834 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-462000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-462000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-462000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.704625ms)

-- stdout --
	* The control-plane node force-systemd-env-462000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-462000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-462000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-25 16:09:54.689117 -0700 PDT m=+717.945797959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-462000 -n force-systemd-env-462000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-462000 -n force-systemd-env-462000: exit status 7 (34.505417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-462000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-462000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-462000
--- FAIL: TestForceSystemdEnv (10.22s)
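
Every failure in this run bottoms out at the same point: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the CI host; it assumes socket_vmnet was installed as a Homebrew-managed service, which this log does not confirm:

	# Does the unix socket exist, and is a socket_vmnet daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Hypothetical remediation, assuming a Homebrew-managed service:
	sudo brew services restart socket_vmnet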

TestErrorSpam/setup (9.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-870000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-870000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 --driver=qemu2 : exit status 80 (9.89859225s)

-- stdout --
	* [nospam-870000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-870000" primary control-plane node in "nospam-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-870000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-870000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-870000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19758
- KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-870000" primary control-plane node in "nospam-870000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-870000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.90s)
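
TestErrorSpam asserts that a plain start emits no unexpected stderr and reaches the kubeadm init sub-steps; here stderr is entirely socket_vmnet noise and the sub-steps never appear. A sketch of replaying the same check by hand against a captured start log (the start.log filename is hypothetical; the patterns are the missing sub-step messages above):

	grep -E 'Generating certificates and keys|Booting up control plane|Configuring RBAC rules' start.log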

TestFunctional/serial/StartWithProxy (9.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-543000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-543000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.855568s)

-- stdout --
	* [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-543000" primary control-plane node in "functional-543000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-543000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61978 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61978 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61978 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-543000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19758
- KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-543000" primary control-plane node in "functional-543000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-543000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:61978 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:61978 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:61978 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (73.407625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.93s)
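
The proxy assertions in this test never execute: the harness exports HTTP_PROXY=localhost:61978 and expects minikube to print "You appear to be using a proxy", but the start aborts at VM creation first. The "Local proxy ignored" warnings are expected behavior per minikube's own message (a localhost proxy is not passed to the docker env). A sketch of the manual reproduction, reusing the binary, profile, and flags from this run:

	HTTP_PROXY=localhost:61978 out/minikube-darwin-arm64 start -p functional-543000 \
		--memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2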

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1025 15:59:00.607688   10998 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-543000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-543000 --alsologtostderr -v=8: exit status 80 (5.195232167s)

-- stdout --
	* [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-543000" primary control-plane node in "functional-543000" cluster
	* Restarting existing qemu2 VM for "functional-543000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-543000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 15:59:00.641486   11215 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:00.641653   11215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:00.641657   11215 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:00.641659   11215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:00.641804   11215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:00.642886   11215 out.go:352] Setting JSON to false
	I1025 15:59:00.660442   11215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6378,"bootTime":1729890762,"procs":502,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:59:00.660521   11215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:59:00.666061   11215 out.go:177] * [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:59:00.673112   11215 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 15:59:00.673131   11215 notify.go:220] Checking for updates...
	I1025 15:59:00.681056   11215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:59:00.685119   11215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:59:00.688082   11215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:59:00.691128   11215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 15:59:00.694069   11215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 15:59:00.697254   11215 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:00.697309   11215 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:59:00.702008   11215 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 15:59:00.708988   11215 start.go:297] selected driver: qemu2
	I1025 15:59:00.708994   11215 start.go:901] validating driver "qemu2" against &{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:59:00.709036   11215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 15:59:00.711601   11215 cni.go:84] Creating CNI manager for ""
	I1025 15:59:00.711646   11215 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 15:59:00.711705   11215 start.go:340] cluster config:
	{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:59:00.716256   11215 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 15:59:00.722875   11215 out.go:177] * Starting "functional-543000" primary control-plane node in "functional-543000" cluster
	I1025 15:59:00.727079   11215 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 15:59:00.727096   11215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 15:59:00.727103   11215 cache.go:56] Caching tarball of preloaded images
	I1025 15:59:00.727176   11215 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 15:59:00.727181   11215 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 15:59:00.727250   11215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/functional-543000/config.json ...
	I1025 15:59:00.727732   11215 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 15:59:00.727765   11215 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "functional-543000"
	I1025 15:59:00.727775   11215 start.go:96] Skipping create...Using existing machine configuration
	I1025 15:59:00.727779   11215 fix.go:54] fixHost starting: 
	I1025 15:59:00.727914   11215 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
	W1025 15:59:00.727923   11215 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 15:59:00.736028   11215 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
	I1025 15:59:00.740059   11215 qemu.go:418] Using hvf for hardware acceleration
	I1025 15:59:00.740093   11215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
	I1025 15:59:00.742468   11215 main.go:141] libmachine: STDOUT: 
	I1025 15:59:00.742499   11215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 15:59:00.742528   11215 fix.go:56] duration metric: took 14.746291ms for fixHost
	I1025 15:59:00.742534   11215 start.go:83] releasing machines lock for "functional-543000", held for 14.764083ms
	W1025 15:59:00.742540   11215 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 15:59:00.742596   11215 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:59:00.742602   11215 start.go:729] Will try again in 5 seconds ...
	I1025 15:59:05.744684   11215 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 15:59:05.745230   11215 start.go:364] duration metric: took 433.834µs to acquireMachinesLock for "functional-543000"
	I1025 15:59:05.745412   11215 start.go:96] Skipping create...Using existing machine configuration
	I1025 15:59:05.745429   11215 fix.go:54] fixHost starting: 
	I1025 15:59:05.746263   11215 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
	W1025 15:59:05.746289   11215 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 15:59:05.749781   11215 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
	I1025 15:59:05.757653   11215 qemu.go:418] Using hvf for hardware acceleration
	I1025 15:59:05.757897   11215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
	I1025 15:59:05.767335   11215 main.go:141] libmachine: STDOUT: 
	I1025 15:59:05.767403   11215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 15:59:05.767464   11215 fix.go:56] duration metric: took 22.033083ms for fixHost
	I1025 15:59:05.767483   11215 start.go:83] releasing machines lock for "functional-543000", held for 22.174917ms
	W1025 15:59:05.767706   11215 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:59:05.774628   11215 out.go:201] 
	W1025 15:59:05.778664   11215 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 15:59:05.778692   11215 out.go:270] * 
	* 
	W1025 15:59:05.781516   11215 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 15:59:05.789555   11215 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-543000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.197000958s for "functional-543000" cluster.
I1025 15:59:05.804901   10998 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (73.317333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
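
Unlike the fresh-create tests above, SoftStart exercises the restart path: an existing profile is found, fixHost sees state=Stopped, and the same socket_vmnet_client invocation is retried once after 5 seconds before giving up. A quick check for a stale QEMU process holding the machine, using the pidfile path from the log (a sketch; after a failed start the pidfile may simply not exist):

	PIDFILE=/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid
	cat "$PIDFILE" && ps -p "$(cat "$PIDFILE")"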

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.417875ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-543000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (34.835042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
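
This failure is downstream fallout rather than a distinct bug: the cluster never started, so no functional-543000 context was written to the kubeconfig, and "kubectl config current-context" correctly reports that nothing is set. For reference, the manual check once a cluster does come up (plain kubectl; only the profile name is taken from this run):

	kubectl config get-contexts
	kubectl config use-context functional-543000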

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-543000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-543000 get po -A: exit status 1 (26.544042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

                                                
                                                
** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-543000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-543000\n"*: args "kubectl --context functional-543000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-543000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (34.379667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl images: exit status 83 (47.84675ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (43.757708ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-543000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (46.7665ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (47.819542ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)
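
Note: "crictl inspecti" is crictl's inspect-image subcommand, so the test is verifying the cached image inside the node after a reload. A sketch of the intended flow against a healthy cluster (assumes the node is running and reachable over SSH):

	out/minikube-darwin-arm64 -p functional-543000 cache add registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-543000 cache reload
	out/minikube-darwin-arm64 -p functional-543000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
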
TestFunctional/serial/MinikubeKubectlCmd (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 kubectl -- --context functional-543000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 kubectl -- --context functional-543000 get pods: exit status 1 (710.660291ms)

** stderr **
	Error in configuration:
	* context was not found for specified context: functional-543000
	* no server found for cluster "functional-543000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-543000 kubectl -- --context functional-543000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (36.406833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.75s)
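
Note: "minikube kubectl --" forwards everything after the "--" to a kubectl binary matched to the cluster's Kubernetes version, so this test and the direct out/kubectl test below fail on the same missing context. Equivalent invocations, assuming a running cluster:

	out/minikube-darwin-arm64 -p functional-543000 kubectl -- get pods -A   # via the wrapper
	out/kubectl --context functional-543000 get pods -A                     # calling the cached binary directly
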
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-543000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-543000 get pods: exit status 1 (1.154595167s)

** stderr **
	Error in configuration:
	* context was not found for specified context: functional-543000
	* no server found for cluster "functional-543000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-543000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (33.658ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.19s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-543000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-543000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.190531333s)

-- stdout --
	* [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-543000" primary control-plane node in "functional-543000" cluster
	* Restarting existing qemu2 VM for "functional-543000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-543000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-543000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.191608083s for "functional-543000" cluster.
I1025 15:59:16.551544   10998 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (73.294667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
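
Note: every restart attempt in this run dies on the same "Connection refused" against /var/run/socket_vmnet, so the VM (and everything above it) never comes up. A recovery sketch for the build host, assuming socket_vmnet was installed via Homebrew as the /opt/socket_vmnet paths in the log suggest:

	ls -l /var/run/socket_vmnet                      # the socket is missing or dead if the daemon is down
	sudo brew services restart socket_vmnet          # restart the network helper the qemu2 driver needs
	out/minikube-darwin-arm64 delete -p functional-543000
	out/minikube-darwin-arm64 start -p functional-543000 --driver=qemu2 --network=socket_vmnet
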
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-543000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-543000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.543333ms)

** stderr **
	error: context "functional-543000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-543000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (34.778542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
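
Note: ComponentHealth selects control-plane pods by label. The same query, runnable by hand once a cluster exists:

	kubectl --context functional-543000 -n kube-system get po -l tier=control-plane
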
TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 logs: exit status 83 (82.644125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:57 PDT |                     |
	|         | -p download-only-826000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| delete  | -p download-only-826000                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| start   | -o=json --download-only                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | -p download-only-831000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| delete  | -p download-only-831000                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| delete  | -p download-only-826000                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| delete  | -p download-only-831000                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| start   | --download-only -p                                                       | binary-mirror-869000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | binary-mirror-869000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:61946                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-869000                                                  | binary-mirror-869000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| addons  | enable dashboard -p                                                      | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | addons-362000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | addons-362000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-362000 --wait=true                                             | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-362000                                                         | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| start   | -p nospam-870000 -n=1 --memory=2250 --wait=false                         | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-870000                                                         | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | minikube-local-cache-test:functional-543000                              |                      |         |         |                     |                     |
	| cache   | functional-543000 cache delete                                           | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | minikube-local-cache-test:functional-543000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	| ssh     | functional-543000 ssh sudo                                               | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-543000                                                        | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-543000 ssh                                                    | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-543000 cache reload                                           | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	| ssh     | functional-543000 ssh                                                    | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-543000 kubectl --                                             | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | --context functional-543000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 15:59:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 15:59:11.390922   11292 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:11.391073   11292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:11.391075   11292 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:11.391076   11292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:11.391185   11292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:11.392333   11292 out.go:352] Setting JSON to false
	I1025 15:59:11.409666   11292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6389,"bootTime":1729890762,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:59:11.409729   11292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:59:11.414475   11292 out.go:177] * [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:59:11.421375   11292 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 15:59:11.421407   11292 notify.go:220] Checking for updates...
	I1025 15:59:11.428334   11292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:59:11.431326   11292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:59:11.434408   11292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:59:11.437382   11292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 15:59:11.440361   11292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 15:59:11.443705   11292 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:11.443757   11292 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:59:11.448233   11292 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 15:59:11.455331   11292 start.go:297] selected driver: qemu2
	I1025 15:59:11.455335   11292 start.go:901] validating driver "qemu2" against &{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:59:11.455402   11292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 15:59:11.457937   11292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 15:59:11.457960   11292 cni.go:84] Creating CNI manager for ""
	I1025 15:59:11.457992   11292 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 15:59:11.458050   11292 start.go:340] cluster config:
	{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:59:11.462587   11292 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 15:59:11.469292   11292 out.go:177] * Starting "functional-543000" primary control-plane node in "functional-543000" cluster
	I1025 15:59:11.473218   11292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 15:59:11.473232   11292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 15:59:11.473240   11292 cache.go:56] Caching tarball of preloaded images
	I1025 15:59:11.473314   11292 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 15:59:11.473318   11292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 15:59:11.473374   11292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/functional-543000/config.json ...
	I1025 15:59:11.473776   11292 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 15:59:11.473820   11292 start.go:364] duration metric: took 40.083µs to acquireMachinesLock for "functional-543000"
	I1025 15:59:11.473827   11292 start.go:96] Skipping create...Using existing machine configuration
	I1025 15:59:11.473831   11292 fix.go:54] fixHost starting: 
	I1025 15:59:11.473942   11292 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
	W1025 15:59:11.473948   11292 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 15:59:11.482339   11292 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
	I1025 15:59:11.486297   11292 qemu.go:418] Using hvf for hardware acceleration
	I1025 15:59:11.486332   11292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
	I1025 15:59:11.488574   11292 main.go:141] libmachine: STDOUT: 
	I1025 15:59:11.488591   11292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 15:59:11.488627   11292 fix.go:56] duration metric: took 14.796041ms for fixHost
	I1025 15:59:11.488630   11292 start.go:83] releasing machines lock for "functional-543000", held for 14.806833ms
	W1025 15:59:11.488634   11292 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 15:59:11.488671   11292 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:59:11.488675   11292 start.go:729] Will try again in 5 seconds ...
	I1025 15:59:16.490797   11292 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 15:59:16.491189   11292 start.go:364] duration metric: took 324.042µs to acquireMachinesLock for "functional-543000"
	I1025 15:59:16.491272   11292 start.go:96] Skipping create...Using existing machine configuration
	I1025 15:59:16.491283   11292 fix.go:54] fixHost starting: 
	I1025 15:59:16.491985   11292 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
	W1025 15:59:16.492007   11292 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 15:59:16.499405   11292 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
	I1025 15:59:16.503380   11292 qemu.go:418] Using hvf for hardware acceleration
	I1025 15:59:16.503602   11292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
	I1025 15:59:16.513167   11292 main.go:141] libmachine: STDOUT: 
	I1025 15:59:16.513211   11292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 15:59:16.513291   11292 fix.go:56] duration metric: took 22.009959ms for fixHost
	I1025 15:59:16.513303   11292 start.go:83] releasing machines lock for "functional-543000", held for 22.102167ms
	W1025 15:59:16.513444   11292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 15:59:16.521287   11292 out.go:201] 
	W1025 15:59:16.525524   11292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 15:59:16.525544   11292 out.go:270] * 
	W1025 15:59:16.528128   11292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 15:59:16.536425   11292 out.go:201] 
	
	
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-543000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:57 PDT |                     |
|         | -p download-only-826000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-826000                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | -o=json --download-only                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | -p download-only-831000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-831000                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-826000                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-831000                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | --download-only -p                                                       | binary-mirror-869000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | binary-mirror-869000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:61946                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-869000                                                  | binary-mirror-869000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| addons  | enable dashboard -p                                                      | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | addons-362000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | addons-362000                                                            |                      |         |         |                     |                     |
| start   | -p addons-362000 --wait=true                                             | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-362000                                                         | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | -p nospam-870000 -n=1 --memory=2250 --wait=false                         | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-870000                                                         | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | minikube-local-cache-test:functional-543000                              |                      |         |         |                     |                     |
| cache   | functional-543000 cache delete                                           | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | minikube-local-cache-test:functional-543000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
| ssh     | functional-543000 ssh sudo                                               | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-543000                                                        | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-543000 ssh                                                    | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-543000 cache reload                                           | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
| ssh     | functional-543000 ssh                                                    | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-543000 kubectl --                                             | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | --context functional-543000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/25 15:59:11
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1025 15:59:11.390922   11292 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:11.391073   11292 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:11.391075   11292 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:11.391076   11292 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:11.391185   11292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:11.392333   11292 out.go:352] Setting JSON to false
I1025 15:59:11.409666   11292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6389,"bootTime":1729890762,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1025 15:59:11.409729   11292 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1025 15:59:11.414475   11292 out.go:177] * [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1025 15:59:11.421375   11292 out.go:177]   - MINIKUBE_LOCATION=19758
I1025 15:59:11.421407   11292 notify.go:220] Checking for updates...
I1025 15:59:11.428334   11292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
I1025 15:59:11.431326   11292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1025 15:59:11.434408   11292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 15:59:11.437382   11292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
I1025 15:59:11.440361   11292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1025 15:59:11.443705   11292 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:11.443757   11292 driver.go:394] Setting default libvirt URI to qemu:///system
I1025 15:59:11.448233   11292 out.go:177] * Using the qemu2 driver based on existing profile
I1025 15:59:11.455331   11292 start.go:297] selected driver: qemu2
I1025 15:59:11.455335   11292 start.go:901] validating driver "qemu2" against &{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 15:59:11.455402   11292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 15:59:11.457937   11292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1025 15:59:11.457960   11292 cni.go:84] Creating CNI manager for ""
I1025 15:59:11.457992   11292 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1025 15:59:11.458050   11292 start.go:340] cluster config:
{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 15:59:11.462587   11292 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 15:59:11.469292   11292 out.go:177] * Starting "functional-543000" primary control-plane node in "functional-543000" cluster
I1025 15:59:11.473218   11292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1025 15:59:11.473232   11292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1025 15:59:11.473240   11292 cache.go:56] Caching tarball of preloaded images
I1025 15:59:11.473314   11292 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1025 15:59:11.473318   11292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1025 15:59:11.473374   11292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/functional-543000/config.json ...
I1025 15:59:11.473776   11292 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 15:59:11.473820   11292 start.go:364] duration metric: took 40.083µs to acquireMachinesLock for "functional-543000"
I1025 15:59:11.473827   11292 start.go:96] Skipping create...Using existing machine configuration
I1025 15:59:11.473831   11292 fix.go:54] fixHost starting: 
I1025 15:59:11.473942   11292 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
W1025 15:59:11.473948   11292 fix.go:138] unexpected machine state, will restart: <nil>
I1025 15:59:11.482339   11292 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
I1025 15:59:11.486297   11292 qemu.go:418] Using hvf for hardware acceleration
I1025 15:59:11.486332   11292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
I1025 15:59:11.488574   11292 main.go:141] libmachine: STDOUT: 
I1025 15:59:11.488591   11292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1025 15:59:11.488627   11292 fix.go:56] duration metric: took 14.796041ms for fixHost
I1025 15:59:11.488630   11292 start.go:83] releasing machines lock for "functional-543000", held for 14.806833ms
W1025 15:59:11.488634   11292 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1025 15:59:11.488671   11292 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1025 15:59:11.488675   11292 start.go:729] Will try again in 5 seconds ...
I1025 15:59:16.490797   11292 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 15:59:16.491189   11292 start.go:364] duration metric: took 324.042µs to acquireMachinesLock for "functional-543000"
I1025 15:59:16.491272   11292 start.go:96] Skipping create...Using existing machine configuration
I1025 15:59:16.491283   11292 fix.go:54] fixHost starting: 
I1025 15:59:16.491985   11292 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
W1025 15:59:16.492007   11292 fix.go:138] unexpected machine state, will restart: <nil>
I1025 15:59:16.499405   11292 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
I1025 15:59:16.503380   11292 qemu.go:418] Using hvf for hardware acceleration
I1025 15:59:16.503602   11292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
I1025 15:59:16.513167   11292 main.go:141] libmachine: STDOUT: 
I1025 15:59:16.513211   11292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1025 15:59:16.513291   11292 fix.go:56] duration metric: took 22.009959ms for fixHost
I1025 15:59:16.513303   11292 start.go:83] releasing machines lock for "functional-543000", held for 22.102167ms
W1025 15:59:16.513444   11292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1025 15:59:16.521287   11292 out.go:201] 
W1025 15:59:16.525524   11292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1025 15:59:16.525544   11292 out.go:270] * 
W1025 15:59:16.528128   11292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 15:59:16.536425   11292 out.go:201] 

* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
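Both log captures above bottom out in the same root cause: minikube launches the qemu2 VM through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so no Linux guest ever boots and `minikube logs` has no "Linux" line to satisfy the assertion at functional_test.go:1228. As a minimal triage sketch (not part of the test suite; the socket path is taken verbatim from the STDERR lines above), a standalone Go probe can confirm whether the socket_vmnet daemon is listening on the CI host:

// socketprobe.go: dial the socket_vmnet unix socket the way a client
// would and report whether anything is accepting connections.
package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

func main() {
    const sock = "/var/run/socket_vmnet" // path from the STDERR in the log
    conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    if err != nil {
        // A daemon that is not running yields "connection refused",
        // matching the libmachine STDERR captured above.
        fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
        os.Exit(1)
    }
    defer conn.Close()
    fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails the same way, the fix belongs on the build agent (restarting the socket_vmnet daemon) rather than in minikube itself, which is consistent with every qemu2-driver test in this run failing at host start.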
TestFunctional/serial/LogsFileCmd (0.08s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1283093255/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:57 PDT |                     |
|         | -p download-only-826000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-826000                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | -o=json --download-only                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | -p download-only-831000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-831000                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-826000                                                  | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| delete  | -p download-only-831000                                                  | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | --download-only -p                                                       | binary-mirror-869000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | binary-mirror-869000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:61946                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-869000                                                  | binary-mirror-869000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| addons  | enable dashboard -p                                                      | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | addons-362000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | addons-362000                                                            |                      |         |         |                     |                     |
| start   | -p addons-362000 --wait=true                                             | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-362000                                                         | addons-362000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | -p nospam-870000 -n=1 --memory=2250 --wait=false                         | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-870000 --log_dir                                                  | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-870000                                                         | nospam-870000        | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-543000 cache add                                              | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | minikube-local-cache-test:functional-543000                              |                      |         |         |                     |                     |
| cache   | functional-543000 cache delete                                           | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | minikube-local-cache-test:functional-543000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
| ssh     | functional-543000 ssh sudo                                               | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-543000                                                        | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-543000 ssh                                                    | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-543000 cache reload                                           | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
| ssh     | functional-543000 ssh                                                    | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT | 25 Oct 24 15:59 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-543000 kubectl --                                             | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | --context functional-543000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-543000                                                     | functional-543000    | jenkins | v1.34.0 | 25 Oct 24 15:59 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/25 15:59:11
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1025 15:59:11.390922   11292 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:11.391073   11292 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:11.391075   11292 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:11.391076   11292 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:11.391185   11292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:11.392333   11292 out.go:352] Setting JSON to false
I1025 15:59:11.409666   11292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6389,"bootTime":1729890762,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1025 15:59:11.409729   11292 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1025 15:59:11.414475   11292 out.go:177] * [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1025 15:59:11.421375   11292 out.go:177]   - MINIKUBE_LOCATION=19758
I1025 15:59:11.421407   11292 notify.go:220] Checking for updates...
I1025 15:59:11.428334   11292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
I1025 15:59:11.431326   11292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1025 15:59:11.434408   11292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 15:59:11.437382   11292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
I1025 15:59:11.440361   11292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1025 15:59:11.443705   11292 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:11.443757   11292 driver.go:394] Setting default libvirt URI to qemu:///system
I1025 15:59:11.448233   11292 out.go:177] * Using the qemu2 driver based on existing profile
I1025 15:59:11.455331   11292 start.go:297] selected driver: qemu2
I1025 15:59:11.455335   11292 start.go:901] validating driver "qemu2" against &{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 15:59:11.455402   11292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 15:59:11.457937   11292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1025 15:59:11.457960   11292 cni.go:84] Creating CNI manager for ""
I1025 15:59:11.457992   11292 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1025 15:59:11.458050   11292 start.go:340] cluster config:
{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1025 15:59:11.462587   11292 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 15:59:11.469292   11292 out.go:177] * Starting "functional-543000" primary control-plane node in "functional-543000" cluster
I1025 15:59:11.473218   11292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1025 15:59:11.473232   11292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1025 15:59:11.473240   11292 cache.go:56] Caching tarball of preloaded images
I1025 15:59:11.473314   11292 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1025 15:59:11.473318   11292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1025 15:59:11.473374   11292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/functional-543000/config.json ...
I1025 15:59:11.473776   11292 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 15:59:11.473820   11292 start.go:364] duration metric: took 40.083µs to acquireMachinesLock for "functional-543000"
I1025 15:59:11.473827   11292 start.go:96] Skipping create...Using existing machine configuration
I1025 15:59:11.473831   11292 fix.go:54] fixHost starting: 
I1025 15:59:11.473942   11292 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
W1025 15:59:11.473948   11292 fix.go:138] unexpected machine state, will restart: <nil>
I1025 15:59:11.482339   11292 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
I1025 15:59:11.486297   11292 qemu.go:418] Using hvf for hardware acceleration
I1025 15:59:11.486332   11292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
I1025 15:59:11.488574   11292 main.go:141] libmachine: STDOUT: 
I1025 15:59:11.488591   11292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1025 15:59:11.488627   11292 fix.go:56] duration metric: took 14.796041ms for fixHost
I1025 15:59:11.488630   11292 start.go:83] releasing machines lock for "functional-543000", held for 14.806833ms
W1025 15:59:11.488634   11292 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1025 15:59:11.488671   11292 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1025 15:59:11.488675   11292 start.go:729] Will try again in 5 seconds ...
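Both restart attempts fail identically: QEMU is launched through socket_vmnet_client, and the dial to /var/run/socket_vmnet is refused, which points at the socket_vmnet daemon on the CI host rather than at QEMU or the profile. A minimal triage sketch, assuming shell access to the Jenkins agent (none of these commands appear in this log):

    # Is anything serving the socket the driver dials?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If socket_vmnet is installed as a LaunchDaemon, check whether it is loaded:
    sudo launchctl list | grep -i socket_vmnet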
I1025 15:59:16.490797   11292 start.go:360] acquireMachinesLock for functional-543000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 15:59:16.491189   11292 start.go:364] duration metric: took 324.042µs to acquireMachinesLock for "functional-543000"
I1025 15:59:16.491272   11292 start.go:96] Skipping create...Using existing machine configuration
I1025 15:59:16.491283   11292 fix.go:54] fixHost starting: 
I1025 15:59:16.491985   11292 fix.go:112] recreateIfNeeded on functional-543000: state=Stopped err=<nil>
W1025 15:59:16.492007   11292 fix.go:138] unexpected machine state, will restart: <nil>
I1025 15:59:16.499405   11292 out.go:177] * Restarting existing qemu2 VM for "functional-543000" ...
I1025 15:59:16.503380   11292 qemu.go:418] Using hvf for hardware acceleration
I1025 15:59:16.503602   11292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:24:29:23:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/functional-543000/disk.qcow2
I1025 15:59:16.513167   11292 main.go:141] libmachine: STDOUT: 
I1025 15:59:16.513211   11292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1025 15:59:16.513291   11292 fix.go:56] duration metric: took 22.009959ms for fixHost
I1025 15:59:16.513303   11292 start.go:83] releasing machines lock for "functional-543000", held for 22.102167ms
W1025 15:59:16.513444   11292 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1025 15:59:16.521287   11292 out.go:201] 
W1025 15:59:16.525524   11292 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1025 15:59:16.525544   11292 out.go:270] * 
W1025 15:59:16.528128   11292 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 15:59:16.536425   11292 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-543000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-543000 apply -f testdata/invalidsvc.yaml: exit status 1 (29.424792ms)

** stderr ** 
	error: context "functional-543000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-543000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
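Every kubectl step in this test fails with the same 'context "functional-543000" does not exist' error, which follows directly from the failed start above: the cluster never came up, so no kubeconfig entry was written for the profile. A sketch of how one might confirm that from the agent, assuming the same kubeconfig the run uses:

    # The profile should be missing from the contexts list entirely.
    kubectl config get-contexts
    # If the cluster were running behind a stale entry, this would rewrite it:
    out/minikube-darwin-arm64 update-context -p functional-543000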

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-543000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-543000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-543000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-543000 --alsologtostderr -v=1] stderr:
I1025 15:59:51.203543   11600 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:51.203973   11600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.203977   11600 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:51.203979   11600 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.204163   11600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:51.204389   11600 mustload.go:65] Loading cluster: functional-543000
I1025 15:59:51.204619   11600 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.208037   11600 out.go:177] * The control-plane node functional-543000 host is not running: state=Stopped
I1025 15:59:51.211959   11600 out.go:177]   To start a cluster, run: "minikube start -p functional-543000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (45.920917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 status
I1025 15:59:21.298462   10998 retry.go:31] will retry after 3.632428401s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 status: exit status 7 (33.906ms)

-- stdout --
	functional-543000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-543000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.67225ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-543000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 status -o json: exit status 7 (33.89525ms)

-- stdout --
	{"Name":"functional-543000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-543000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (33.669833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
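minikube status exits non-zero by design when any component is down; the post-mortem helpers above even note that exit status 7 "may be ok". The failure here is therefore the Stopped state itself, not the status command. The three output modes the test exercises, runnable against any profile (profile name taken from this run):

    out/minikube-darwin-arm64 -p functional-543000 status
    out/minikube-darwin-arm64 -p functional-543000 status --format '{{.Host}}'
    out/minikube-darwin-arm64 -p functional-543000 status -o json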

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-543000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-543000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.796542ms)

** stderr ** 
	error: context "functional-543000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-543000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-543000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-543000 describe po hello-node-connect: exit status 1 (26.643958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:1604: "kubectl --context functional-543000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-543000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-543000 logs -l app=hello-node-connect: exit status 1 (26.662417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:1610: "kubectl --context functional-543000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-543000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-543000 describe svc hello-node-connect: exit status 1 (26.842375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:1616: "kubectl --context functional-543000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (35.203583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-543000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (33.847542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "echo hello": exit status 83 (52.506875ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n"*. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "cat /etc/hostname": exit status 83 (53.899792ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-543000"- but got *"* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n"*. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (34.69875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (59.965ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /home/docker/cp-test.txt": exit status 83 (47.946459ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-543000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-543000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cp functional-543000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3055056829/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 cp functional-543000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3055056829/001/cp-test.txt: exit status 83 (45.40525ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-543000 cp functional-543000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3055056829/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.239209ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3055056829/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.796625ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (48.632959ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-543000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-543000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
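The cp test is a round trip: copy host-to-VM, cat the file over ssh, copy VM-to-host, then copy into a non-existent VM path. With the host stopped, every step short-circuits to the "host is not running" hint (exit status 83), so the diffs above compare the expected file contents against that hint text. The healthy-path commands as exercised by the test, with the return-copy destination simplified here for illustration:

    out/minikube-darwin-arm64 -p functional-543000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-543000 ssh -n functional-543000 "sudo cat /home/docker/cp-test.txt"
    out/minikube-darwin-arm64 -p functional-543000 cp functional-543000:/home/docker/cp-test.txt /tmp/cp-test.txt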

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10998/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/test/nested/copy/10998/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/test/nested/copy/10998/hosts": exit status 83 (45.343416ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/test/nested/copy/10998/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-543000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-543000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (34.752334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)
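FileSync relies on minikube's file-sync behavior: files staged under the files/ subtree of MINIKUBE_HOME are copied into the VM at the same relative path on start, which is what the /etc/test/nested/copy/10998/hosts check above exercises. A sketch of that layout, with paths mirroring this run (the staging itself happens in the test harness, not in this log):

    mkdir -p $MINIKUBE_HOME/files/etc/test/nested/copy/10998
    cp /etc/hosts $MINIKUBE_HOME/files/etc/test/nested/copy/10998/hosts
    # After a successful start, the file should be readable inside the VM:
    out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/test/nested/copy/10998/hosts"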

TestFunctional/parallel/CertSync (0.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10998.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/10998.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/10998.pem": exit status 83 (45.347791ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/10998.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo cat /etc/ssl/certs/10998.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/10998.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-543000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-543000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10998.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /usr/share/ca-certificates/10998.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /usr/share/ca-certificates/10998.pem": exit status 83 (50.695208ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/10998.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo cat /usr/share/ca-certificates/10998.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/10998.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-543000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-543000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (46.544167ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-543000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-543000"
  	"""
  )
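The hash-named file checked above (51391683.0) is not arbitrary: CertSync verifies both the path-named copies of the test certificates and the OpenSSL subject-hash links that c_rehash-style cert directories use. Assuming the pem files sit in the test's testdata directory, the expected hashes can be reproduced locally:

    openssl x509 -in testdata/minikube_test.pem -noout -subject_hash     # expect 51391683
    openssl x509 -in testdata/minikube_test2.pem -noout -subject_hash    # expect 3ec20f2e (checked below)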
functional_test.go:1999: Checking for existence of /etc/ssl/certs/109982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/109982.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/109982.pem": exit status 83 (47.614042ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/109982.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo cat /etc/ssl/certs/109982.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/109982.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-543000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-543000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/109982.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /usr/share/ca-certificates/109982.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /usr/share/ca-certificates/109982.pem": exit status 83 (44.750083ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/109982.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo cat /usr/share/ca-certificates/109982.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/109982.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-543000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-543000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (45.682458ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-543000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-543000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (35.084417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.32s)
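The three mismatches above are one assertion repeated per synced location: read the file over ssh and byte-compare it with the local test PEM. A minimal standalone sketch of that check, assuming a running functional-543000 profile; the local PEM path is illustrative, not the harness's actual location:

// certsync_check.go: sketch of the CertSync assertion; the local PEM path
// below is illustrative, not the harness's actual location.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/minikube_test2.pem") // illustrative path
	if err != nil {
		panic(err)
	}
	// Read the synced copy from inside the VM, as the test does.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"ssh", "sudo cat /etc/ssl/certs/109982.pem").Output()
	if err != nil {
		panic(err) // exit status 83 means the host is not running
	}
	if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(out)) {
		fmt.Println("synced certificate does not match the local PEM")
		os.Exit(1)
	}
}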

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-543000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-543000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.882583ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-543000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-543000 -n functional-543000: exit status 7 (34.229958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
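The failing assertion is a plain substring check for each minikube.k8s.io/* key over the go-template output. A minimal sketch of the same query, assuming the functional-543000 context exists in the kubeconfig:

// nodelabels_check.go: re-runs the kubectl go-template query from the log
// and checks for the minikube.k8s.io/* labels the test expects.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'`
	out, err := exec.Command("kubectl", "--context", "functional-543000",
		"get", "nodes", "--output=go-template", tmpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	for _, label := range []string{
		"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
		"minikube.k8s.io/primary",
	} {
		if !strings.Contains(string(out), label) {
			fmt.Printf("missing label %q\n", label)
		}
	}
}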

TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo systemctl is-active crio": exit status 83 (50.404708ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.05s)
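With docker as the active runtime, the test expects crio to be reported inactive. Note that systemctl is-active exits non-zero for inactive units, so a faithful sketch inspects the printed state rather than the error value (binary and profile as above):

// runtime_check.go: sketch of the "non-active runtime is disabled" check,
// assuming a running profile with the docker runtime selected.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemctl is-active exits non-zero when the unit is inactive, so the
	// error is ignored deliberately and only the printed state is checked.
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state == "active" {
		fmt.Println("crio unexpectedly active alongside docker")
	} else {
		fmt.Printf("crio state: %q (expected inactive)\n", state)
	}
}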

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 version -o=json --components: exit status 83 (46.01625ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
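Each expectation above is a plain substring scan over the command's JSON output rather than a structured parse. A minimal sketch of the same scan, assuming the same binary and profile:

// version_check.go: scans `minikube version -o=json --components` output
// for the component names the failing test expects.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"version", "-o=json", "--components").CombinedOutput()
	if err != nil {
		fmt.Printf("version failed: %v\n", err)
	}
	for _, want := range []string{"buildctl", "commit", "containerd", "crictl",
		"crio", "ctr", "docker", "minikubeVersion", "podman", "crun"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing component %q\n", want)
		}
	}
}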

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-543000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-543000 image ls --format short --alsologtostderr:
I1025 15:59:51.638283   11615 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:51.638460   11615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.638463   11615 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:51.638465   11615 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.638606   11615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:51.639016   11615 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.639077   11615 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
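The four ImageList* failures here and below are the same assertion applied to different output formats: the default pause image must appear in image ls. A sketch covering all four formats at once (binary and profile as above):

// imagels_check.go: verifies registry.k8s.io/pause shows up in `image ls`,
// the shared assertion behind the four ImageList* format tests.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, format := range []string{"short", "table", "json", "yaml"} {
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
			"image", "ls", "--format", format).CombinedOutput()
		if strings.Contains(string(out), "registry.k8s.io/pause") {
			fmt.Printf("%s: pause image listed\n", format)
		} else {
			fmt.Printf("%s: pause image missing\n", format)
		}
	}
}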

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-543000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-543000 image ls --format table --alsologtostderr:
I1025 15:59:51.888172   11627 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:51.888357   11627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.888360   11627 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:51.888363   11627 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.888498   11627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:51.888937   11627 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.889003   11627 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1025 16:00:03.877623   10998 retry.go:31] will retry after 23.329292093s: Temporary Error: Get "http:": http: no Host in request URL
I1025 16:00:27.208909   10998 retry.go:31] will retry after 17.696416347s: Temporary Error: Get "http:": http: no Host in request URL
I1025 16:00:44.907413   10998 retry.go:31] will retry after 30.653656488s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-543000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-543000 image ls --format json --alsologtostderr:
I1025 15:59:51.847621   11625 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:51.847807   11625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.847811   11625 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:51.847813   11625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.847962   11625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:51.848386   11625 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.848450   11625 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-543000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-543000 image ls --format yaml --alsologtostderr:
I1025 15:59:51.678704   11617 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:51.678921   11617 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.678924   11617 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:51.678926   11617 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.679083   11617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:51.679566   11617 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.679626   11617 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh pgrep buildkitd: exit status 83 (45.917125ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image build -t localhost/my-image:functional-543000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-543000 image build -t localhost/my-image:functional-543000 testdata/build --alsologtostderr:
I1025 15:59:51.765947   11621 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:51.766775   11621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.766779   11621 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:51.766782   11621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:51.766934   11621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:51.767349   11621 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.767788   11621 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:51.768015   11621 build_images.go:133] succeeded building to: 
I1025 15:59:51.768019   11621 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls
functional_test.go:446: expected "localhost/my-image:functional-543000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
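The build path first probes for buildkitd over ssh, then builds the testdata/build context and expects the tag to land in the image list. A sketch of the same round trip, assuming the testdata/build directory from the harness:

// imagebuild_check.go: builds testdata/build and checks the tag appears in
// the image list; directory contents and binary path as in the failing test.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tag := "localhost/my-image:functional-543000"
	build := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"image", "ls").CombinedOutput()
	if !strings.Contains(string(out), tag) {
		fmt.Println("built image not listed")
	}
}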

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-543000 docker-env) && out/minikube-darwin-arm64 status -p functional-543000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-543000 docker-env) && out/minikube-darwin-arm64 status -p functional-543000": exit status 1 (49.689417ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2: exit status 83 (46.732ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
** stderr ** 
	I1025 15:59:51.496404   11609 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:51.497256   11609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.497259   11609 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:51.497262   11609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.497392   11609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:51.497590   11609 mustload.go:65] Loading cluster: functional-543000
	I1025 15:59:51.497768   11609 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:51.501955   11609 out.go:177] * The control-plane node functional-543000 host is not running: state=Stopped
	I1025 15:59:51.505900   11609 out.go:177]   To start a cluster, run: "minikube start -p functional-543000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
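The three UpdateContextCmd subtests run the identical command and differ only in the substring they expect: "No changes" here, "context has been updated" in the two that follow. A sketch that accepts either expected status (binary and profile as above):

// updatecontext_check.go: runs update-context and checks for one of the two
// expected status substrings; which one applies depends on kubeconfig state.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"update-context", "--alsologtostderr", "-v=2").CombinedOutput()
	if err != nil {
		fmt.Printf("update-context failed: %v\n%s", err, out)
		return
	}
	s := string(out)
	if strings.Contains(s, "No changes") || strings.Contains(s, "context has been updated") {
		fmt.Println("update-context reported an expected status")
	} else {
		fmt.Printf("unexpected output: %q\n", s)
	}
}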

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2: exit status 83 (46.553083ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
** stderr ** 
	I1025 15:59:51.590712   11613 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:51.590897   11613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.590900   11613 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:51.590902   11613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.591023   11613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:51.591246   11613 mustload.go:65] Loading cluster: functional-543000
	I1025 15:59:51.591439   11613 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:51.596001   11613 out.go:177] * The control-plane node functional-543000 host is not running: state=Stopped
	I1025 15:59:51.599837   11613 out.go:177]   To start a cluster, run: "minikube start -p functional-543000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2: exit status 83 (46.716458ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
** stderr ** 
	I1025 15:59:51.543463   11611 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:51.543665   11611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.543668   11611 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:51.543671   11611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.543791   11611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:51.544022   11611 mustload.go:65] Loading cluster: functional-543000
	I1025 15:59:51.544231   11611 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:51.548902   11611 out.go:177] * The control-plane node functional-543000 host is not running: state=Stopped
	I1025 15:59:51.552990   11611 out.go:177]   To start a cluster, run: "minikube start -p functional-543000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-543000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-543000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-543000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.909959ms)

** stderr ** 
	error: context "functional-543000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-543000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 service list: exit status 83 (49.856542ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-543000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 service list -o json: exit status 83 (45.861792ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-543000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 service --namespace=default --https --url hello-node: exit status 83 (45.924083ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-543000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 service hello-node --url --format={{.IP}}: exit status 83 (46.775417ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-543000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 service hello-node --url: exit status 83 (43.786667ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-543000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test.go:1569: failed to parse "* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"": parse "* The control-plane node functional-543000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-543000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
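The URL subtest feeds whatever service --url prints straight into net/url, which is why the advisory text surfaces above as a parse error. A sketch of that validation step, assuming the same binary and profile:

// serviceurl_check.go: validates that `service --url` output parses as a
// URL, mirroring the parse step that fails in the log above.
package main

import (
	"fmt"
	"net/url"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-543000",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Printf("service --url failed: %v\n", err)
		return
	}
	endpoint := strings.TrimSpace(string(out))
	if u, err := url.Parse(endpoint); err != nil {
		fmt.Printf("not a valid URL: %v\n", err)
	} else {
		fmt.Printf("endpoint: %s://%s\n", u.Scheme, u.Host)
	}
}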

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1025 15:59:18.478143   11412 out.go:345] Setting OutFile to fd 1 ...
I1025 15:59:18.478328   11412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:18.478332   11412 out.go:358] Setting ErrFile to fd 2...
I1025 15:59:18.478334   11412 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 15:59:18.478463   11412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 15:59:18.478676   11412 mustload.go:65] Loading cluster: functional-543000
I1025 15:59:18.478911   11412 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 15:59:18.484015   11412 out.go:177] * The control-plane node functional-543000 host is not running: state=Stopped
I1025 15:59:18.496103   11412 out.go:177]   To start a cluster, run: "minikube start -p functional-543000"

stdout: * The control-plane node functional-543000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-543000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 11411: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-543000": client config: context "functional-543000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (117.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1025 15:59:18.563062   10998 retry.go:31] will retry after 2.733480012s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-543000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-543000 get svc nginx-svc: exit status 1 (69.865666ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-543000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-543000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (117.09s)
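The retry.go lines are the harness polling the tunneled endpoint with a growing backoff. A sketch of an equivalent probe loop; the target URL is illustrative, since the real test uses the LoadBalancer IP the tunnel publishes:

// tunnelprobe.go: polls a tunneled service with backoff, in the style of the
// harness's retry.go; the target URL below is illustrative only.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://10.96.0.1" // illustrative; the test uses the tunnel's LoadBalancer IP
	delay := 2 * time.Second
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("got %d bytes\n", len(body))
			return
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}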

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image load --daemon kicbase/echo-server:functional-543000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-543000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image load --daemon kicbase/echo-server:functional-543000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-543000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-543000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image load --daemon kicbase/echo-server:functional-543000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-543000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image save kicbase/echo-server:functional-543000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-543000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
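ImageSaveToFile and ImageLoadFromFile form a round trip: save must produce the tarball that load consumes, so the load failure here follows directly from the missing tarball above. A sketch of the round trip with the same tag and path:

// imagesaveload_check.go: round-trips an image through `image save` and
// `image load`; tag and tar path are those from the failing tests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-darwin-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	tar := "/Users/jenkins/workspace/echo-server-save.tar"
	if err := run("-p", "functional-543000", "image", "save",
		"kicbase/echo-server:functional-543000", tar); err != nil {
		fmt.Println("save failed:", err)
		return
	}
	if _, err := os.Stat(tar); err != nil {
		fmt.Println("tarball missing after save:", err) // the failure above
		return
	}
	if err := run("-p", "functional-543000", "image", "load", tar); err != nil {
		fmt.Println("load failed:", err)
	}
}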

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1025 16:01:15.650157   10998 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036122917s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1
DNS configuration (for scoped queries)
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 12 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
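dig here talks straight to the cluster DNS service at 10.96.0.10, which is only reachable from the host while "minikube tunnel" routes the service CIDR; the timeout means that route was never usable. The same lookup can be reproduced from Go by pinning a resolver to that address (a sketch under the same assumption that the tunnel is up; hypothetical file name):

    // cluster_dig.go: resolve a service name against the cluster DNS server
    // directly, mirroring "dig @10.96.0.10 nginx-svc.default.svc.cluster.local. A".
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            // Ignore the system resolver address and dial the cluster DNS service.
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("lookup failed:", err) // what this run would print
            return
        }
        fmt.Println("resolved:", addrs)
    }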
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1025 16:01:40.783748   10998 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 16:01:50.785901   10998 retry.go:31] will retry after 3.887154075s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1025 16:02:04.677452   10998 retry.go:31] will retry after 5.240721143s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:61780->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
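The probe at functional_test_tunnel_test.go:419 is a plain HTTP GET against the service's cluster DNS name, with a client timeout and retries handled by the harness. A minimal version of that probe (hypothetical file name; assumes the tunnel's DNS forwarding is in place, which it is not in this run):

    // probe_nginx.go: fetch the service by its cluster DNS name and check the body.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}
        resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("status:", resp.Status,
            "welcome page:", strings.Contains(string(body), "Welcome to nginx!"))
    }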
TestMultiControlPlane/serial/StartCluster (10.03s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-563000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-563000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.959232042s)
-- stdout --
	* [ha-563000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-563000" primary control-plane node in "ha-563000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-563000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1025 16:02:11.195746   11947 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:02:11.195910   11947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:02:11.195913   11947 out.go:358] Setting ErrFile to fd 2...
	I1025 16:02:11.195916   11947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:02:11.196045   11947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:02:11.197156   11947 out.go:352] Setting JSON to false
	I1025 16:02:11.214967   11947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6569,"bootTime":1729890762,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:02:11.215029   11947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:02:11.220490   11947 out.go:177] * [ha-563000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:02:11.228444   11947 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:02:11.228501   11947 notify.go:220] Checking for updates...
	I1025 16:02:11.234491   11947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:02:11.236000   11947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:02:11.239501   11947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:02:11.242529   11947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:02:11.245529   11947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:02:11.248713   11947 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:02:11.253545   11947 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:02:11.260446   11947 start.go:297] selected driver: qemu2
	I1025 16:02:11.260453   11947 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:02:11.260461   11947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:02:11.263014   11947 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:02:11.267499   11947 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:02:11.270553   11947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:02:11.270569   11947 cni.go:84] Creating CNI manager for ""
	I1025 16:02:11.270588   11947 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1025 16:02:11.270592   11947 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 16:02:11.270620   11947 start.go:340] cluster config:
	{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:02:11.275228   11947 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:02:11.283280   11947 out.go:177] * Starting "ha-563000" primary control-plane node in "ha-563000" cluster
	I1025 16:02:11.287483   11947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:02:11.287500   11947 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:02:11.287507   11947 cache.go:56] Caching tarball of preloaded images
	I1025 16:02:11.287593   11947 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:02:11.287600   11947 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:02:11.287811   11947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/ha-563000/config.json ...
	I1025 16:02:11.287824   11947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/ha-563000/config.json: {Name:mk9ae879570626eaab4eb14811d2526a70394bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:02:11.288212   11947 start.go:360] acquireMachinesLock for ha-563000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:02:11.288265   11947 start.go:364] duration metric: took 47.125µs to acquireMachinesLock for "ha-563000"
	I1025 16:02:11.288280   11947 start.go:93] Provisioning new machine with config: &{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:02:11.288315   11947 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:02:11.296484   11947 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:02:11.314211   11947 start.go:159] libmachine.API.Create for "ha-563000" (driver="qemu2")
	I1025 16:02:11.314245   11947 client.go:168] LocalClient.Create starting
	I1025 16:02:11.314320   11947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:02:11.314358   11947 main.go:141] libmachine: Decoding PEM data...
	I1025 16:02:11.314369   11947 main.go:141] libmachine: Parsing certificate...
	I1025 16:02:11.314403   11947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:02:11.314433   11947 main.go:141] libmachine: Decoding PEM data...
	I1025 16:02:11.314445   11947 main.go:141] libmachine: Parsing certificate...
	I1025 16:02:11.314910   11947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:02:11.472511   11947 main.go:141] libmachine: Creating SSH key...
	I1025 16:02:11.673875   11947 main.go:141] libmachine: Creating Disk image...
	I1025 16:02:11.673884   11947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:02:11.674123   11947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:02:11.684555   11947 main.go:141] libmachine: STDOUT: 
	I1025 16:02:11.684576   11947 main.go:141] libmachine: STDERR: 
	I1025 16:02:11.684637   11947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2 +20000M
	I1025 16:02:11.693045   11947 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:02:11.693066   11947 main.go:141] libmachine: STDERR: 
	I1025 16:02:11.693079   11947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:02:11.693083   11947 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:02:11.693092   11947 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:02:11.693124   11947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:4a:91:60:0c:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:02:11.694933   11947 main.go:141] libmachine: STDOUT: 
	I1025 16:02:11.694956   11947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:02:11.694975   11947 client.go:171] duration metric: took 380.729708ms to LocalClient.Create
	I1025 16:02:13.697191   11947 start.go:128] duration metric: took 2.408888125s to createHost
	I1025 16:02:13.697292   11947 start.go:83] releasing machines lock for "ha-563000", held for 2.409048125s
	W1025 16:02:13.697348   11947 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:02:13.711466   11947 out.go:177] * Deleting "ha-563000" in qemu2 ...
	W1025 16:02:13.739271   11947 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:02:13.739298   11947 start.go:729] Will try again in 5 seconds ...
	I1025 16:02:18.741425   11947 start.go:360] acquireMachinesLock for ha-563000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:02:18.742110   11947 start.go:364] duration metric: took 551.916µs to acquireMachinesLock for "ha-563000"
	I1025 16:02:18.742261   11947 start.go:93] Provisioning new machine with config: &{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:02:18.742502   11947 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:02:18.754240   11947 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:02:18.802947   11947 start.go:159] libmachine.API.Create for "ha-563000" (driver="qemu2")
	I1025 16:02:18.802990   11947 client.go:168] LocalClient.Create starting
	I1025 16:02:18.803109   11947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:02:18.803194   11947 main.go:141] libmachine: Decoding PEM data...
	I1025 16:02:18.803209   11947 main.go:141] libmachine: Parsing certificate...
	I1025 16:02:18.803269   11947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:02:18.803326   11947 main.go:141] libmachine: Decoding PEM data...
	I1025 16:02:18.803355   11947 main.go:141] libmachine: Parsing certificate...
	I1025 16:02:18.804024   11947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:02:18.972628   11947 main.go:141] libmachine: Creating SSH key...
	I1025 16:02:19.055661   11947 main.go:141] libmachine: Creating Disk image...
	I1025 16:02:19.055666   11947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:02:19.055881   11947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:02:19.065931   11947 main.go:141] libmachine: STDOUT: 
	I1025 16:02:19.065946   11947 main.go:141] libmachine: STDERR: 
	I1025 16:02:19.066013   11947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2 +20000M
	I1025 16:02:19.074474   11947 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:02:19.074495   11947 main.go:141] libmachine: STDERR: 
	I1025 16:02:19.074508   11947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:02:19.074513   11947 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:02:19.074521   11947 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:02:19.074555   11947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:93:5b:ed:64:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:02:19.076300   11947 main.go:141] libmachine: STDOUT: 
	I1025 16:02:19.076321   11947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:02:19.076334   11947 client.go:171] duration metric: took 273.34375ms to LocalClient.Create
	I1025 16:02:21.078527   11947 start.go:128] duration metric: took 2.335986875s to createHost
	I1025 16:02:21.078765   11947 start.go:83] releasing machines lock for "ha-563000", held for 2.336505667s
	W1025 16:02:21.079143   11947 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-563000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:02:21.092958   11947 out.go:201] 
	W1025 16:02:21.095931   11947 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:02:21.095956   11947 out.go:270] * 
	* 
	W1025 16:02:21.098607   11947 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:02:21.106887   11947 out.go:201] 
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-563000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (73.329959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.03s)
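Both provisioning attempts die at the same step: QEMU is launched through socket_vmnet_client, nothing is listening on /var/run/socket_vmnet, and the connection is refused before the VM can boot. A quick diagnostic (a sketch, not part of the suite) is to dial the unix socket directly:

    // check_vmnet.go: confirm whether the socket_vmnet daemon is accepting
    // connections on the path minikube is configured to use.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // "connection refused" here matches the failure in the log above.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening")
    }

If the dial is refused, restarting the daemon usually clears the whole TestMultiControlPlane cascade that follows; for a Homebrew install that is typically "sudo brew services start socket_vmnet", though the exact service name depends on how socket_vmnet was installed.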
TestMultiControlPlane/serial/DeployApp (114.9s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.5725ms)
** stderr ** 
	error: cluster "ha-563000" does not exist
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- rollout status deployment/busybox: exit status 1 (61.958292ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.874417ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:21.385840   10998 retry.go:31] will retry after 794.150005ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.357541ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:22.292645   10998 retry.go:31] will retry after 1.451658873s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.1535ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:23.854810   10998 retry.go:31] will retry after 1.431335762s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.391541ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:25.397822   10998 retry.go:31] will retry after 3.693461852s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.092708ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:29.202801   10998 retry.go:31] will retry after 3.362194141s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.045917ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:32.676512   10998 retry.go:31] will retry after 4.526579401s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.74525ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:37.314130   10998 retry.go:31] will retry after 14.431200671s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.283291ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:02:51.856785   10998 retry.go:31] will retry after 13.80625904s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.151791ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:03:05.775365   10998 retry.go:31] will retry after 27.782361022s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.85175ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:03:33.669725   10998 retry.go:31] will retry after 42.029944782s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.876583ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.240291ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.427625ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.795333ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.287166ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.223875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (114.90s)
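Every kubectl call above fails identically because StartCluster never produced a server for the ha-563000 context, so the harness's retry loop (logged from retry.go:31) simply exhausts its budget against a cluster that does not exist. The loop's shape is retry with growing, jittered delays; a simplified, hypothetical rendition (retryWithBackoff is an invented helper, not minikube's):

    // retry_sketch.go: a stripped-down version of the backoff pattern the
    // harness logs as "will retry after ...".
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn up to attempts times, sleeping a jittered,
    // roughly doubling interval between failures.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        err := retryWithBackoff(4, 200*time.Millisecond, func() error {
            return errors.New(`no server found for cluster "ha-563000"`)
        })
        fmt.Println("gave up:", err)
    }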
TestMultiControlPlane/serial/PingHostFromPods (0.1s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-563000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.946125ms)
** stderr ** 
	error: no server found for cluster "ha-563000"
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.815875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)
TestMultiControlPlane/serial/AddWorkerNode (0.08s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-563000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-563000 -v=7 --alsologtostderr: exit status 83 (45.905084ms)
-- stdout --
	* The control-plane node ha-563000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-563000"
-- /stdout --
** stderr ** 
	I1025 16:04:16.224916   12040 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:16.225347   12040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.225351   12040 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:16.225353   12040 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.225516   12040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:16.225759   12040 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:16.225975   12040 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:16.229234   12040 out.go:177] * The control-plane node ha-563000 host is not running: state=Stopped
	I1025 16:04:16.233077   12040 out.go:177]   To start a cluster, run: "minikube start -p ha-563000"
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-563000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.797833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-563000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-563000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.697916ms)
** stderr ** 
	Error in configuration: context was not found for specified context: ha-563000
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-563000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-563000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.044916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-563000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-563000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.010666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
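ha_test.go:305 and :309 decode the "profile list --output json" payload and expect the profile to carry four nodes with a "HAppy" status; after the failed start it still holds the single declared control-plane node in "Starting". A trimmed sketch of that decode, with the struct cut down to the fields the assertions read and the payload shortened from the one quoted above:

    // profile_check.go: count nodes and read the status from a profile-list payload.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                }
            }
        }
    }

    func main() {
        // Truncated version of the JSON in the failure message above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-563000","Status":"Starting",
            "Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            fmt.Println("decode:", err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: status=%s nodes=%d (want 4 nodes, status HAppy)\n",
                p.Name, p.Status, len(p.Config.Nodes))
        }
    }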
TestMultiControlPlane/serial/CopyFile (0.07s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status --output json -v=7 --alsologtostderr: exit status 7 (34.088166ms)

-- stdout --
	{"Name":"ha-563000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1025 16:04:16.454433   12052 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:16.454626   12052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.454629   12052 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:16.454631   12052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.454762   12052 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:16.454884   12052 out.go:352] Setting JSON to true
	I1025 16:04:16.454895   12052 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:16.454953   12052 notify.go:220] Checking for updates...
	I1025 16:04:16.455102   12052 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:16.455113   12052 status.go:174] checking status of ha-563000 ...
	I1025 16:04:16.455362   12052 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:16.455365   12052 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:16.455367   12052 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-563000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.855958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
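The `json: cannot unmarshal object into Go value of type []cluster.Status` failure above is standard encoding/json behavior: with a single node, `minikube status --output json` prints one JSON object (see the stdout block), while the test decodes into a slice. A minimal reproduction, using a hypothetical stand-in for cluster.Status:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // status is a stand-in for minikube's cluster.Status; only two of the
    // fields visible in the logged output are reproduced here.
    type status struct {
        Name string `json:"Name"`
        Host string `json:"Host"`
    }

    func main() {
        // One object, not an array, as captured in the stdout block above.
        data := []byte(`{"Name":"ha-563000","Host":"Stopped"}`)

        var statuses []status
        // Decoding an object into a slice fails with the same class of error
        // as the test: "cannot unmarshal object into Go value of type []...".
        if err := json.Unmarshal(data, &statuses); err != nil {
            fmt.Println("decode error:", err)
        }
    }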

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.83925ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 16:04:16.524263   12056 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:16.524739   12056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.524743   12056 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:16.524745   12056 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.524929   12056 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:16.525174   12056 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:16.525374   12056 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:16.529969   12056 out.go:201] 
	W1025 16:04:16.533894   12056 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1025 16:04:16.533899   12056 out.go:270] * 
	* 
	W1025 16:04:16.535706   12056 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:04:16.539960   12056 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-563000 node stop m02 -v=7 --alsologtostderr": exit status 85
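Exit status 85 with GUEST_NODE_RETRIEVE follows from the profile config logged earlier in this report: its Nodes list holds a single entry with an empty Name, so a lookup for "m02" has nothing to find. A hypothetical sketch of that retrieval failing (not minikube's actual code):

    package main

    import (
        "errors"
        "fmt"
    )

    // node carries just the fields relevant here; the logged profile config
    // shows one entry with an empty Name.
    type node struct {
        Name         string
        ControlPlane bool
    }

    var errNodeNotFound = errors.New("Could not find node")

    // findNode is a hypothetical stand-in for the lookup behind
    // "GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02".
    func findNode(nodes []node, name string) (node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, fmt.Errorf("%w %s", errNodeNotFound, name)
    }

    func main() {
        nodes := []node{{Name: "", ControlPlane: true}} // as in the logged config
        if _, err := findNode(nodes, "m02"); err != nil {
            fmt.Println(err) // Could not find node m02
        }
    }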
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (34.8715ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:16.577915   12058 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:16.578104   12058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.578107   12058 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:16.578109   12058 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.578224   12058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:16.578342   12058 out.go:352] Setting JSON to false
	I1025 16:04:16.578353   12058 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:16.578410   12058 notify.go:220] Checking for updates...
	I1025 16:04:16.578554   12058 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:16.578563   12058 status.go:174] checking status of ha-563000 ...
	I1025 16:04:16.578824   12058 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:16.578828   12058 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:16.578830   12058 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.820125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
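The assertions at ha_test.go:377-386 scan the plain-text status output for per-node lines; with one stopped node, none of the expected counts hold. A rough, hypothetical sketch of that style of check over the exact text shown in the stdout block:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Verbatim shape of the stdout block above: one node, everything stopped.
        out := "ha-563000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

        // Hypothetical versions of the counts the test messages describe: three
        // control-plane entries, three running hosts and kubelets, and two
        // running apiservers for a healthy HA cluster after stopping one node.
        fmt.Println("control planes:", strings.Count(out, "type: Control Plane"))    // 1, want 3
        fmt.Println("hosts running:", strings.Count(out, "host: Running"))           // 0, want 3
        fmt.Println("kubelets running:", strings.Count(out, "kubelet: Running"))     // 0, want 3
        fmt.Println("apiservers running:", strings.Count(out, "apiserver: Running")) // 0, want 2
    }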

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-563000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.338ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
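The post-mortem helper above calls `status --format={{.Host}}`; the format string is a Go text/template evaluated against the status record, which is why it prints just "Stopped". A minimal sketch with a hypothetical struct holding the fields seen in this log:

    package main

    import (
        "os"
        "text/template"
    )

    // status holds the fields the format string can reference (illustrative).
    type status struct {
        Name string
        Host string
    }

    func main() {
        // --format={{.Host}} selects a single field from the status record.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        if err := tmpl.Execute(os.Stdout, status{Name: "ha-563000", Host: "Stopped"}); err != nil {
            panic(err)
        }
        // Output: Stopped
    }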

TestMultiControlPlane/serial/RestartSecondaryNode (50.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 node start m02 -v=7 --alsologtostderr: exit status 85 (52.358ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 16:04:16.733929   12067 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:16.734326   12067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.734330   12067 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:16.734332   12067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.734459   12067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:16.734686   12067 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:16.734871   12067 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:16.738943   12067 out.go:201] 
	W1025 16:04:16.742872   12067 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1025 16:04:16.742877   12067 out.go:270] * 
	* 
	W1025 16:04:16.744831   12067 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:04:16.748909   12067 out.go:201] 

** /stderr **
ha_test.go:424: I1025 16:04:16.733929   12067 out.go:345] Setting OutFile to fd 1 ...
I1025 16:04:16.734326   12067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 16:04:16.734330   12067 out.go:358] Setting ErrFile to fd 2...
I1025 16:04:16.734332   12067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 16:04:16.734459   12067 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 16:04:16.734686   12067 mustload.go:65] Loading cluster: ha-563000
I1025 16:04:16.734871   12067 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 16:04:16.738943   12067 out.go:201] 
W1025 16:04:16.742872   12067 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1025 16:04:16.742877   12067 out.go:270] * 
* 
W1025 16:04:16.744831   12067 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 16:04:16.748909   12067 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-563000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (34.713875ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:16.786778   12069 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:16.786965   12069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.786969   12069 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:16.786971   12069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:16.787106   12069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:16.787238   12069 out.go:352] Setting JSON to false
	I1025 16:04:16.787249   12069 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:16.787310   12069 notify.go:220] Checking for updates...
	I1025 16:04:16.787474   12069 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:16.787483   12069 status.go:174] checking status of ha-563000 ...
	I1025 16:04:16.787735   12069 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:16.787738   12069 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:16.787743   12069 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:16.788656   10998 retry.go:31] will retry after 1.438309397s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (80.29175ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:18.307525   12071 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:18.307747   12071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:18.307751   12071 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:18.307754   12071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:18.307894   12071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:18.308021   12071 out.go:352] Setting JSON to false
	I1025 16:04:18.308033   12071 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:18.308072   12071 notify.go:220] Checking for updates...
	I1025 16:04:18.308270   12071 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:18.308279   12071 status.go:174] checking status of ha-563000 ...
	I1025 16:04:18.308557   12071 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:18.308561   12071 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:18.308563   12071 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:18.309595   10998 retry.go:31] will retry after 1.944574131s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (81.120917ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:20.335486   12073 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:20.335709   12073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:20.335715   12073 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:20.335718   12073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:20.335886   12073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:20.336033   12073 out.go:352] Setting JSON to false
	I1025 16:04:20.336045   12073 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:20.336086   12073 notify.go:220] Checking for updates...
	I1025 16:04:20.336293   12073 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:20.336303   12073 status.go:174] checking status of ha-563000 ...
	I1025 16:04:20.336605   12073 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:20.336609   12073 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:20.336612   12073 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:20.337635   10998 retry.go:31] will retry after 3.008289508s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (79.666417ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:23.425712   12075 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:23.425966   12075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:23.425970   12075 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:23.425973   12075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:23.426141   12075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:23.426290   12075 out.go:352] Setting JSON to false
	I1025 16:04:23.426303   12075 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:23.426348   12075 notify.go:220] Checking for updates...
	I1025 16:04:23.426548   12075 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:23.426557   12075 status.go:174] checking status of ha-563000 ...
	I1025 16:04:23.426862   12075 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:23.426867   12075 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:23.426870   12075 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:23.427905   10998 retry.go:31] will retry after 4.468971738s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (81.294417ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:27.977042   12077 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:27.977256   12077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:27.977260   12077 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:27.977263   12077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:27.977440   12077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:27.977584   12077 out.go:352] Setting JSON to false
	I1025 16:04:27.977601   12077 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:27.977645   12077 notify.go:220] Checking for updates...
	I1025 16:04:27.977854   12077 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:27.977864   12077 status.go:174] checking status of ha-563000 ...
	I1025 16:04:27.978170   12077 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:27.978174   12077 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:27.978177   12077 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:27.979191   10998 retry.go:31] will retry after 3.419482581s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (80.540584ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:31.479215   12079 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:31.479437   12079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:31.479441   12079 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:31.479445   12079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:31.479627   12079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:31.479790   12079 out.go:352] Setting JSON to false
	I1025 16:04:31.479804   12079 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:31.479836   12079 notify.go:220] Checking for updates...
	I1025 16:04:31.480089   12079 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:31.480098   12079 status.go:174] checking status of ha-563000 ...
	I1025 16:04:31.480428   12079 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:31.480433   12079 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:31.480436   12079 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:31.481550   10998 retry.go:31] will retry after 8.034713986s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (78.775125ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:39.595210   12081 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:39.595425   12081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:39.595429   12081 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:39.595432   12081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:39.595579   12081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:39.595731   12081 out.go:352] Setting JSON to false
	I1025 16:04:39.595743   12081 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:39.595788   12081 notify.go:220] Checking for updates...
	I1025 16:04:39.595993   12081 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:39.596003   12081 status.go:174] checking status of ha-563000 ...
	I1025 16:04:39.596286   12081 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:39.596290   12081 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:39.596293   12081 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:39.597332   10998 retry.go:31] will retry after 14.641100776s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (80.653917ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:04:54.319343   12091 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:04:54.319601   12091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:54.319606   12091 out.go:358] Setting ErrFile to fd 2...
	I1025 16:04:54.319608   12091 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:04:54.319780   12091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:04:54.319945   12091 out.go:352] Setting JSON to false
	I1025 16:04:54.319957   12091 mustload.go:65] Loading cluster: ha-563000
	I1025 16:04:54.320006   12091 notify.go:220] Checking for updates...
	I1025 16:04:54.320217   12091 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:04:54.320225   12091 status.go:174] checking status of ha-563000 ...
	I1025 16:04:54.320518   12091 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:04:54.320522   12091 status.go:384] host is not running, skipping remaining checks
	I1025 16:04:54.320525   12091 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 16:04:54.321567   10998 retry.go:31] will retry after 12.505635928s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (80.6015ms)

-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:05:06.907843   12093 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:06.908056   12093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:06.908060   12093 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:06.908063   12093 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:06.908216   12093 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:06.908354   12093 out.go:352] Setting JSON to false
	I1025 16:05:06.908375   12093 mustload.go:65] Loading cluster: ha-563000
	I1025 16:05:06.908416   12093 notify.go:220] Checking for updates...
	I1025 16:05:06.908618   12093 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:06.908627   12093 status.go:174] checking status of ha-563000 ...
	I1025 16:05:06.908925   12093 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:05:06.908929   12093 status.go:384] host is not running, skipping remaining checks
	I1025 16:05:06.908932   12093 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.730709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.25s)
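The retry.go:31 lines above show the polling loop sleeping for roughly exponentially growing intervals (1.4s, 1.9s, 3.0s, 4.5s, 3.4s, 8.0s, 14.6s, 12.5s); the non-monotonic steps suggest randomized jitter. A hypothetical sketch of that pattern, not minikube's actual retry helper:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // backoff returns a jittered, exponentially growing delay; a hypothetical
    // stand-in for the helper behind the "will retry after ..." log lines.
    func backoff(attempt int, base time.Duration) time.Duration {
        d := base * time.Duration(1<<attempt) // 1s, 2s, 4s, 8s, ...
        // Up to 50% random jitter, which would explain the uneven intervals
        // in the log (e.g. 4.47s followed by 3.42s).
        return d + time.Duration(rand.Int63n(int64(d/2)+1))
    }

    func main() {
        for attempt := 0; attempt < 5; attempt++ {
            fmt.Println(backoff(attempt, time.Second))
        }
    }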

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-563000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-563000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.539417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
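The node-count assertion above (ha_test.go:305) counts entries in the profile's Config.Nodes array, which in the logged JSON holds one control-plane node instead of the expected four. A hypothetical sketch of reaching that count:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Only the nesting needed to reach Nodes is reproduced; field names are
    // taken from the JSON captured in the failure message above.
    type profiles struct {
        Valid []struct {
            Name   string `json:"Name"`
            Config struct {
                Nodes []struct {
                    ControlPlane bool `json:"ControlPlane"`
                    Worker       bool `json:"Worker"`
                } `json:"Nodes"`
            } `json:"Config"`
        } `json:"valid"`
    }

    func main() {
        // Trimmed to the fields above; the real payload is far larger.
        data := []byte(`{"valid":[{"Name":"ha-563000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)

        var ps profiles
        if err := json.Unmarshal(data, &ps); err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(ps.Valid[0].Config.Nodes)) // 1, test wanted 4
    }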

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.12s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-563000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-563000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-563000 -v=7 --alsologtostderr: (3.729871416s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-563000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-563000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.2401155s)

-- stdout --
	* [ha-563000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-563000" primary control-plane node in "ha-563000" cluster
	* Restarting existing qemu2 VM for "ha-563000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-563000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
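The "Connection refused" on /var/run/socket_vmnet above means nothing is listening on the unix socket that the qemu2 driver's socket_vmnet network requires (SocketVMnetPath in the profile config). A quick, hypothetical probe of that socket from Go:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket the qemu2 driver is configured with
        // (SocketVMnetPath in the profile config logged earlier).
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            // Reproduces the "Connection refused" failure mode seen above
            // when the socket_vmnet daemon is not running.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }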
** stderr ** 
	I1025 16:05:10.873498   12127 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:10.873716   12127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:10.873724   12127 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:10.873727   12127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:10.873883   12127 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:10.875266   12127 out.go:352] Setting JSON to false
	I1025 16:05:10.895969   12127 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6748,"bootTime":1729890762,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:05:10.896031   12127 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:05:10.900226   12127 out.go:177] * [ha-563000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:05:10.908269   12127 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:05:10.908313   12127 notify.go:220] Checking for updates...
	I1025 16:05:10.915256   12127 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:05:10.918255   12127 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:05:10.921222   12127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:05:10.924252   12127 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:05:10.927266   12127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:05:10.930440   12127 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:10.930509   12127 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:05:10.935195   12127 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:05:10.941109   12127 start.go:297] selected driver: qemu2
	I1025 16:05:10.941115   12127 start.go:901] validating driver "qemu2" against &{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:05:10.941180   12127 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:05:10.943717   12127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:05:10.943749   12127 cni.go:84] Creating CNI manager for ""
	I1025 16:05:10.943776   12127 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 16:05:10.943831   12127 start.go:340] cluster config:
	{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:05:10.948480   12127 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:05:10.956251   12127 out.go:177] * Starting "ha-563000" primary control-plane node in "ha-563000" cluster
	I1025 16:05:10.960209   12127 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:05:10.960228   12127 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:05:10.960238   12127 cache.go:56] Caching tarball of preloaded images
	I1025 16:05:10.960323   12127 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:05:10.960329   12127 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:05:10.960378   12127 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/ha-563000/config.json ...
	I1025 16:05:10.960799   12127 start.go:360] acquireMachinesLock for ha-563000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:05:10.960848   12127 start.go:364] duration metric: took 43.291µs to acquireMachinesLock for "ha-563000"
	I1025 16:05:10.960857   12127 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:05:10.960861   12127 fix.go:54] fixHost starting: 
	I1025 16:05:10.960995   12127 fix.go:112] recreateIfNeeded on ha-563000: state=Stopped err=<nil>
	W1025 16:05:10.961003   12127 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:05:10.969321   12127 out.go:177] * Restarting existing qemu2 VM for "ha-563000" ...
	I1025 16:05:10.973243   12127 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:05:10.973279   12127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:93:5b:ed:64:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:05:10.975634   12127 main.go:141] libmachine: STDOUT: 
	I1025 16:05:10.975656   12127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:05:10.975686   12127 fix.go:56] duration metric: took 14.823041ms for fixHost
	I1025 16:05:10.975692   12127 start.go:83] releasing machines lock for "ha-563000", held for 14.839125ms
	W1025 16:05:10.975698   12127 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:05:10.975732   12127 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:05:10.975737   12127 start.go:729] Will try again in 5 seconds ...
	I1025 16:05:15.977874   12127 start.go:360] acquireMachinesLock for ha-563000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:05:15.978287   12127 start.go:364] duration metric: took 303.292µs to acquireMachinesLock for "ha-563000"
	I1025 16:05:15.978423   12127 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:05:15.978440   12127 fix.go:54] fixHost starting: 
	I1025 16:05:15.979061   12127 fix.go:112] recreateIfNeeded on ha-563000: state=Stopped err=<nil>
	W1025 16:05:15.979092   12127 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:05:15.988497   12127 out.go:177] * Restarting existing qemu2 VM for "ha-563000" ...
	I1025 16:05:15.992296   12127 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:05:15.992510   12127 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:93:5b:ed:64:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:05:16.002082   12127 main.go:141] libmachine: STDOUT: 
	I1025 16:05:16.002139   12127 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:05:16.002210   12127 fix.go:56] duration metric: took 23.768458ms for fixHost
	I1025 16:05:16.002230   12127 start.go:83] releasing machines lock for "ha-563000", held for 23.920334ms
	W1025 16:05:16.002371   12127 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-563000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-563000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:05:16.010435   12127 out.go:201] 
	W1025 16:05:16.014564   12127 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:05:16.014592   12127 out.go:270] * 
	* 
	W1025 16:05:16.017121   12127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:05:16.025514   12127 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-563000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-563000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (36.546708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.12s)
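Every start attempt in this run dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor (`-netdev socket,id=net0,fd=3`) and the VM never boots. A minimal sketch of the same connectivity check follows — useful for confirming on the CI host whether the socket_vmnet daemon is actually listening. The socket path is taken from the log above; running this by hand is an assumption, not part of the test suite.

```go
// socketcheck.go — minimal sketch: dial the unix socket that every
// "Connection refused" line above points at. If this fails the same
// way, the socket_vmnet daemon is down (or the socket file is stale),
// which would explain the repeated GUEST_PROVISION exits.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this host we would expect "connection refused", matching the log.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

If the dial is refused, restarting the socket_vmnet daemon on the host is the likely fix; the `minikube delete -p ha-563000` suggestion printed above probably will not help while the socket itself is down.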

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 node delete m03 -v=7 --alsologtostderr: exit status 83 (46.97575ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-563000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-563000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:05:16.188300   12139 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:16.188946   12139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:16.188950   12139 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:16.188952   12139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:16.189132   12139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:16.189373   12139 mustload.go:65] Loading cluster: ha-563000
	I1025 16:05:16.189601   12139 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:16.193287   12139 out.go:177] * The control-plane node ha-563000 host is not running: state=Stopped
	I1025 16:05:16.197246   12139 out.go:177]   To start a cluster, run: "minikube start -p ha-563000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-563000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (34.592292ms)

                                                
                                                
-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:05:16.235044   12141 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:16.235221   12141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:16.235226   12141 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:16.235229   12141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:16.235395   12141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:16.235506   12141 out.go:352] Setting JSON to false
	I1025 16:05:16.235522   12141 mustload.go:65] Loading cluster: ha-563000
	I1025 16:05:16.235562   12141 notify.go:220] Checking for updates...
	I1025 16:05:16.235735   12141 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:16.235743   12141 status.go:174] checking status of ha-563000 ...
	I1025 16:05:16.235968   12141 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:05:16.235972   12141 status.go:384] host is not running, skipping remaining checks
	I1025 16:05:16.235974   12141 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.660917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)
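The `exit status 7` returned by `minikube status` in these post-mortems is a bit field rather than a plain error code: the command's help text describes the VM, cluster, and Kubernetes states as encoded one per bit, right to left. A small decoding sketch under that assumption (the bit meanings are quoted from `minikube status --help`, not verified against this exact build):

```go
// statusbits.go — sketch decoding minikube's status exit code as a bit
// field (bit meanings taken from `minikube status --help`; treated here
// as an assumption about this build rather than a verified contract).
package main

import "fmt"

func main() {
	const code = 7 // the exit status seen in every post-mortem above
	fmt.Println("minikube (VM) NOK:", code&1 != 0)
	fmt.Println("cluster NOK:      ", code&2 != 0)
	fmt.Println("kubernetes NOK:   ", code&4 != 0)
	// All three bits set: the host never came up, so every component reads
	// "Stopped" and the helpers skip log retrieval, matching the output above.
}
```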

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-563000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.221542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
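The Degraded/HAppy assertions work off `minikube profile list --output json`, whose shape is visible in the escaped blob above: a top-level `valid` array of profiles, each carrying `Name`, `Status`, and the full `Config`. A short sketch of pulling out just the fields the test compares — the struct here is illustrative, with tags chosen to match the JSON keys shown in the log, not minikube's own types:

```go
// profilestatus.go — sketch: extract Name/Status from the same JSON the
// test inspects. Field tags mirror the keys visible in the escaped
// output above; the struct itself is illustrative, not minikube's own.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// This run reports "ha-563000: Starting", never "Degraded" or "HAppy".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}
```

The profile never leaves "Starting" because the restart loop aborts before provisioning, so every status-based assertion downstream of it fails the same way.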

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-563000 stop -v=7 --alsologtostderr: (3.36706825s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr: exit status 7 (71.0785ms)

                                                
                                                
-- stdout --
	ha-563000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:05:19.797098   12170 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:19.797303   12170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:19.797307   12170 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:19.797310   12170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:19.797461   12170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:19.797603   12170 out.go:352] Setting JSON to false
	I1025 16:05:19.797615   12170 mustload.go:65] Loading cluster: ha-563000
	I1025 16:05:19.797650   12170 notify.go:220] Checking for updates...
	I1025 16:05:19.797855   12170 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:19.797864   12170 status.go:174] checking status of ha-563000 ...
	I1025 16:05:19.798147   12170 status.go:371] ha-563000 host status = "Stopped" (err=<nil>)
	I1025 16:05:19.798151   12170 status.go:384] host is not running, skipping remaining checks
	I1025 16:05:19.798154   12170 status.go:176] ha-563000 status: &{Name:ha-563000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-563000 status -v=7 --alsologtostderr": ha-563000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.197708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.47s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-563000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-563000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.18665125s)

                                                
                                                
-- stdout --
	* [ha-563000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-563000" primary control-plane node in "ha-563000" cluster
	* Restarting existing qemu2 VM for "ha-563000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-563000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:05:19.866608   12174 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:19.866758   12174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:19.866761   12174 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:19.866764   12174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:19.866879   12174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:19.867931   12174 out.go:352] Setting JSON to false
	I1025 16:05:19.885536   12174 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6757,"bootTime":1729890762,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:05:19.885608   12174 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:05:19.890070   12174 out.go:177] * [ha-563000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:05:19.897949   12174 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:05:19.897999   12174 notify.go:220] Checking for updates...
	I1025 16:05:19.906080   12174 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:05:19.908966   12174 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:05:19.912090   12174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:05:19.915084   12174 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:05:19.916314   12174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:05:19.919307   12174 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:19.919584   12174 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:05:19.924007   12174 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:05:19.929029   12174 start.go:297] selected driver: qemu2
	I1025 16:05:19.929043   12174 start.go:901] validating driver "qemu2" against &{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:05:19.929096   12174 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:05:19.931529   12174 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:05:19.931553   12174 cni.go:84] Creating CNI manager for ""
	I1025 16:05:19.931575   12174 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 16:05:19.931626   12174 start.go:340] cluster config:
	{Name:ha-563000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-563000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:05:19.936031   12174 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:05:19.943993   12174 out.go:177] * Starting "ha-563000" primary control-plane node in "ha-563000" cluster
	I1025 16:05:19.948106   12174 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:05:19.948123   12174 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:05:19.948131   12174 cache.go:56] Caching tarball of preloaded images
	I1025 16:05:19.948190   12174 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:05:19.948196   12174 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:05:19.948252   12174 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/ha-563000/config.json ...
	I1025 16:05:19.948668   12174 start.go:360] acquireMachinesLock for ha-563000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:05:19.948698   12174 start.go:364] duration metric: took 23.833µs to acquireMachinesLock for "ha-563000"
	I1025 16:05:19.948706   12174 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:05:19.948711   12174 fix.go:54] fixHost starting: 
	I1025 16:05:19.948830   12174 fix.go:112] recreateIfNeeded on ha-563000: state=Stopped err=<nil>
	W1025 16:05:19.948836   12174 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:05:19.952938   12174 out.go:177] * Restarting existing qemu2 VM for "ha-563000" ...
	I1025 16:05:19.961030   12174 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:05:19.961069   12174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:93:5b:ed:64:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:05:19.963245   12174 main.go:141] libmachine: STDOUT: 
	I1025 16:05:19.963265   12174 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:05:19.963294   12174 fix.go:56] duration metric: took 14.582166ms for fixHost
	I1025 16:05:19.963299   12174 start.go:83] releasing machines lock for "ha-563000", held for 14.5975ms
	W1025 16:05:19.963305   12174 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:05:19.963343   12174 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:05:19.963347   12174 start.go:729] Will try again in 5 seconds ...
	I1025 16:05:24.964946   12174 start.go:360] acquireMachinesLock for ha-563000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:05:24.965368   12174 start.go:364] duration metric: took 334.625µs to acquireMachinesLock for "ha-563000"
	I1025 16:05:24.965493   12174 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:05:24.965514   12174 fix.go:54] fixHost starting: 
	I1025 16:05:24.966213   12174 fix.go:112] recreateIfNeeded on ha-563000: state=Stopped err=<nil>
	W1025 16:05:24.966238   12174 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:05:24.970641   12174 out.go:177] * Restarting existing qemu2 VM for "ha-563000" ...
	I1025 16:05:24.974724   12174 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:05:24.974989   12174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:93:5b:ed:64:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/ha-563000/disk.qcow2
	I1025 16:05:24.984785   12174 main.go:141] libmachine: STDOUT: 
	I1025 16:05:24.984846   12174 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:05:24.984912   12174 fix.go:56] duration metric: took 19.401167ms for fixHost
	I1025 16:05:24.984932   12174 start.go:83] releasing machines lock for "ha-563000", held for 19.539417ms
	W1025 16:05:24.985092   12174 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-563000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-563000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:05:24.993680   12174 out.go:201] 
	W1025 16:05:24.997814   12174 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:05:24.997843   12174 out.go:270] * 
	* 
	W1025 16:05:25.000499   12174 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:05:25.007598   12174 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-563000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (76.297375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-563000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.810584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-563000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-563000 --control-plane -v=7 --alsologtostderr: exit status 83 (46.808375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-563000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-563000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:05:25.221850   12189 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:05:25.222040   12189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:25.222043   12189 out.go:358] Setting ErrFile to fd 2...
	I1025 16:05:25.222046   12189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:05:25.222195   12189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:05:25.222440   12189 mustload.go:65] Loading cluster: ha-563000
	I1025 16:05:25.222678   12189 config.go:182] Loaded profile config "ha-563000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:05:25.227202   12189 out.go:177] * The control-plane node ha-563000 host is not running: state=Stopped
	I1025 16:05:25.231246   12189 out.go:177]   To start a cluster, run: "minikube start -p ha-563000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-563000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (34.833208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-563000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-563000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-563000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-563000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-563000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-563000 -n ha-563000: exit status 7 (35.053167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-563000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)
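
The HAppy assertion above reduces to decoding the `profile list --output json` payload quoted in the log and comparing each profile's Status field. A minimal Go sketch of that check, with the struct fields named after the keys visible in the payload above ("valid", "Name", "Status"); the binary path is the one used in this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the status check needs are modelled here.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// This run reports "Starting" where the test expects "HAppy".
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}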

TestImageBuild/serial/Setup (9.98s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-950000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-950000 --driver=qemu2 : exit status 80 (9.905474417s)

-- stdout --
	* [image-950000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-950000" primary control-plane node in "image-950000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-950000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-950000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-950000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-950000 -n image-950000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-950000 -n image-950000: exit status 7 (75.438334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-950000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.98s)
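
Every start in this report dies the same way: the VM is launched through socket_vmnet_client, and nothing is accepting connections on /var/run/socket_vmnet. A minimal connectivity probe for that failure mode, assuming the daemon listens on a stream socket at the path shown in the errors above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The exact address minikube passes to socket_vmnet_client in this run.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Reproduces the "Connection refused" seen in every failure above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}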

TestJSONOutput/start/Command (9.88s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.876472833s)

-- stdout --
	{"specversion":"1.0","id":"258a19e9-0949-4f53-85df-f1d593ed1eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-501000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4edbe2f1-842b-4f48-b082-238aa71e670c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19758"}}
	{"specversion":"1.0","id":"5c148a43-3405-45fa-a6bc-98ef5b824c6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig"}}
	{"specversion":"1.0","id":"d8a1f965-b72b-4f0f-884c-7de100dc4028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d0ad4e77-98c1-4e4c-9e12-1073e7e2798e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"07ed425b-79e5-4d4e-87ce-3c4863b56243","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube"}}
	{"specversion":"1.0","id":"9eaef179-e2f3-4642-aafe-d2991ce9ad19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"01b70a8a-6100-44cc-86e8-26fb4c3e8cb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"91f6bafa-fd68-473f-a3c2-a3bf7f2081d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4e17e19c-664a-4d2f-a03b-d746dbde7a49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-501000\" primary control-plane node in \"json-output-501000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8146196b-fd38-4ed3-ae1b-36a1971a0e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9091b69a-67ce-4d94-a2e2-b1738f00bee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-501000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"febc9ee7-bf92-4d00-8311-1140f6d61f5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fec82ac6-770a-4f08-b265-f1e4d27dc843","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"71190bfa-8c1b-40dd-bbf0-9f722d7c445c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-501000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"05bc656c-9831-463e-a9f0-b46a9a001424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"c6287ea4-631c-48d4-adbe-d8a37864ceb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-501000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.88s)
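
The two trailing errors pin down the parse failure: with --output=json the stream is expected to be one CloudEvent per line, but the raw "OUTPUT:" and "ERROR: ..." lines from the VM launch are interleaved with the JSON. A sketch of a per-line decode under that framing assumption, which fails on the first non-JSON line exactly as reported:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the stdout captured above.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"9"}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Yields: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event type:", ev["type"])
	}
}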

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-501000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-501000 --output=json --user=testUser: exit status 83 (88.616208ms)

-- stdout --
	{"specversion":"1.0","id":"cd102edc-2a2c-4dbb-8d4b-adf40387799e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-501000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"207c23fd-b9f3-4c69-8d93-0074d4df4636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-501000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-501000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-501000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-501000 --output=json --user=testUser: exit status 83 (48.466292ms)

-- stdout --
	* The control-plane node json-output-501000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-501000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-501000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-501000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.07s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-311000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-311000 --driver=qemu2 : exit status 80 (9.746696042s)

-- stdout --
	* [first-311000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-311000" primary control-plane node in "first-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-311000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-25 16:05:57.495454 -0700 PDT m=+480.775978168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-313000 -n second-313000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-313000 -n second-313000: exit status 85 (84.562208ms)

-- stdout --
	* Profile "second-313000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-313000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-313000" host is not running, skipping log retrieval (state="* Profile \"second-313000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-313000\"")
helpers_test.go:175: Cleaning up "second-313000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-313000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-25 16:05:57.69731 -0700 PDT m=+480.977836543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-311000 -n first-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-311000 -n first-311000: exit status 7 (34.799542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-311000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-311000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-311000
--- FAIL: TestMinikubeProfile (10.07s)
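
Four distinct exit codes accompany the post-mortems in this report: 7 from `status` on a stopped host, 80 for GUEST_PROVISION, 83 when the control-plane host is not running, and 85 for a missing profile. The labels in the sketch below are inferred from this report alone, not minikube's own constant names:

package main

import "fmt"

// describeExit maps the exit codes observed in this report to the
// condition each accompanied; labels are descriptive only.
func describeExit(code int) string {
	switch code {
	case 7:
		return "status: host stopped"
	case 80:
		return "GUEST_PROVISION: guest VM could not be created"
	case 83:
		return "control-plane host not running"
	case 85:
		return "profile not found"
	default:
		return "unrecognized in this report"
	}
}

func main() {
	for _, c := range []int{7, 80, 83, 85} {
		fmt.Printf("exit %d: %s\n", c, describeExit(c))
	}
}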

TestMountStart/serial/StartWithMountFirst (10.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-289000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-289000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.121375709s)

-- stdout --
	* [mount-start-1-289000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-289000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-289000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-289000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-289000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-289000 -n mount-start-1-289000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-289000 -n mount-start-1-289000: exit status 7 (73.341542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-289000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.20s)

TestMultiNode/serial/FreshStart2Nodes (9.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-747000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-747000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.842481333s)

-- stdout --
	* [multinode-747000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-747000" primary control-plane node in "multinode-747000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-747000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:06:08.237957   12328 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:06:08.238105   12328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:06:08.238108   12328 out.go:358] Setting ErrFile to fd 2...
	I1025 16:06:08.238110   12328 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:06:08.238231   12328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:06:08.239302   12328 out.go:352] Setting JSON to false
	I1025 16:06:08.256829   12328 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6806,"bootTime":1729890762,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:06:08.256908   12328 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:06:08.261862   12328 out.go:177] * [multinode-747000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:06:08.269775   12328 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:06:08.269828   12328 notify.go:220] Checking for updates...
	I1025 16:06:08.284343   12328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:06:08.287727   12328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:06:08.290608   12328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:06:08.293709   12328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:06:08.296679   12328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:06:08.298317   12328 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:06:08.305727   12328 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:06:08.311673   12328 start.go:297] selected driver: qemu2
	I1025 16:06:08.311680   12328 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:06:08.311686   12328 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:06:08.314271   12328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:06:08.318660   12328 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:06:08.321810   12328 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:06:08.321832   12328 cni.go:84] Creating CNI manager for ""
	I1025 16:06:08.321854   12328 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1025 16:06:08.321864   12328 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 16:06:08.321894   12328 start.go:340] cluster config:
	{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:06:08.326984   12328 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:06:08.335700   12328 out.go:177] * Starting "multinode-747000" primary control-plane node in "multinode-747000" cluster
	I1025 16:06:08.339570   12328 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:06:08.339589   12328 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:06:08.339600   12328 cache.go:56] Caching tarball of preloaded images
	I1025 16:06:08.339685   12328 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:06:08.339691   12328 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:06:08.339948   12328 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/multinode-747000/config.json ...
	I1025 16:06:08.339959   12328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/multinode-747000/config.json: {Name:mke83bba6188638ac43f4c04a1aba6e202b2d9c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:06:08.340319   12328 start.go:360] acquireMachinesLock for multinode-747000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:06:08.340370   12328 start.go:364] duration metric: took 45.292µs to acquireMachinesLock for "multinode-747000"
	I1025 16:06:08.340386   12328 start.go:93] Provisioning new machine with config: &{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:06:08.340413   12328 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:06:08.347710   12328 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:06:08.365378   12328 start.go:159] libmachine.API.Create for "multinode-747000" (driver="qemu2")
	I1025 16:06:08.365412   12328 client.go:168] LocalClient.Create starting
	I1025 16:06:08.365481   12328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:06:08.365518   12328 main.go:141] libmachine: Decoding PEM data...
	I1025 16:06:08.365531   12328 main.go:141] libmachine: Parsing certificate...
	I1025 16:06:08.365571   12328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:06:08.365602   12328 main.go:141] libmachine: Decoding PEM data...
	I1025 16:06:08.365610   12328 main.go:141] libmachine: Parsing certificate...
	I1025 16:06:08.366029   12328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:06:08.522800   12328 main.go:141] libmachine: Creating SSH key...
	I1025 16:06:08.580797   12328 main.go:141] libmachine: Creating Disk image...
	I1025 16:06:08.580806   12328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:06:08.581010   12328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:06:08.590832   12328 main.go:141] libmachine: STDOUT: 
	I1025 16:06:08.590854   12328 main.go:141] libmachine: STDERR: 
	I1025 16:06:08.590912   12328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2 +20000M
	I1025 16:06:08.599412   12328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:06:08.599426   12328 main.go:141] libmachine: STDERR: 
	I1025 16:06:08.599441   12328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:06:08.599448   12328 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:06:08.599460   12328 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:06:08.599492   12328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:55:4d:af:75:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:06:08.601317   12328 main.go:141] libmachine: STDOUT: 
	I1025 16:06:08.601332   12328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:06:08.601351   12328 client.go:171] duration metric: took 235.935666ms to LocalClient.Create
	I1025 16:06:10.603506   12328 start.go:128] duration metric: took 2.263102708s to createHost
	I1025 16:06:10.603570   12328 start.go:83] releasing machines lock for "multinode-747000", held for 2.263219875s
	W1025 16:06:10.603619   12328 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:06:10.618961   12328 out.go:177] * Deleting "multinode-747000" in qemu2 ...
	W1025 16:06:10.646636   12328 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:06:10.646655   12328 start.go:729] Will try again in 5 seconds ...
	I1025 16:06:15.648820   12328 start.go:360] acquireMachinesLock for multinode-747000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:06:15.649363   12328 start.go:364] duration metric: took 447.333µs to acquireMachinesLock for "multinode-747000"
	I1025 16:06:15.649474   12328 start.go:93] Provisioning new machine with config: &{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:06:15.649735   12328 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:06:15.663380   12328 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:06:15.712361   12328 start.go:159] libmachine.API.Create for "multinode-747000" (driver="qemu2")
	I1025 16:06:15.712420   12328 client.go:168] LocalClient.Create starting
	I1025 16:06:15.712561   12328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:06:15.712641   12328 main.go:141] libmachine: Decoding PEM data...
	I1025 16:06:15.712659   12328 main.go:141] libmachine: Parsing certificate...
	I1025 16:06:15.712739   12328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:06:15.712796   12328 main.go:141] libmachine: Decoding PEM data...
	I1025 16:06:15.712813   12328 main.go:141] libmachine: Parsing certificate...
	I1025 16:06:15.713920   12328 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:06:15.882059   12328 main.go:141] libmachine: Creating SSH key...
	I1025 16:06:15.975010   12328 main.go:141] libmachine: Creating Disk image...
	I1025 16:06:15.975017   12328 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:06:15.975206   12328 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:06:15.985045   12328 main.go:141] libmachine: STDOUT: 
	I1025 16:06:15.985069   12328 main.go:141] libmachine: STDERR: 
	I1025 16:06:15.985125   12328 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2 +20000M
	I1025 16:06:15.993586   12328 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:06:15.993602   12328 main.go:141] libmachine: STDERR: 
	I1025 16:06:15.993613   12328 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:06:15.993618   12328 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:06:15.993627   12328 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:06:15.993654   12328 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:bc:9c:ed:fb:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:06:15.995466   12328 main.go:141] libmachine: STDOUT: 
	I1025 16:06:15.995482   12328 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:06:15.995494   12328 client.go:171] duration metric: took 283.072208ms to LocalClient.Create
	I1025 16:06:17.997689   12328 start.go:128] duration metric: took 2.347915334s to createHost
	I1025 16:06:17.997774   12328 start.go:83] releasing machines lock for "multinode-747000", held for 2.348417541s
	W1025 16:06:17.998121   12328 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-747000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-747000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:06:18.013940   12328 out.go:201] 
	W1025 16:06:18.016887   12328 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:06:18.016914   12328 out.go:270] * 
	* 
	W1025 16:06:18.019765   12328 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:06:18.032837   12328 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-747000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (74.485417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.92s)
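
The verbose log above shows the actual launch path: socket_vmnet_client connects to /var/run/socket_vmnet and starts qemu-system-aarch64 with "-netdev socket,id=net0,fd=3", i.e. QEMU inherits an already-connected descriptor. A rough sketch of that hand-off, assuming a stream socket; the real client's behavior may differ:

package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// The step that fails throughout this report.
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer conn.Close()

	f, err := conn.(*net.UnixConn).File() // dup the fd so the child can inherit it
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// ExtraFiles[0] becomes fd 3 in the child, matching "-netdev socket,fd=3".
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "qemu:", err)
	}
}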

TestMultiNode/serial/DeployApp2Nodes (79.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (63.665667ms)

** stderr ** 
	error: cluster "multinode-747000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- rollout status deployment/busybox: exit status 1 (61.632708ms)

** stderr ** 
	error: no server found for cluster "multinode-747000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.129667ms)

** stderr ** 
	error: no server found for cluster "multinode-747000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:18.312085   10998 retry.go:31] will retry after 1.164947072s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.2885ms)

** stderr ** 
	error: no server found for cluster "multinode-747000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:19.587703   10998 retry.go:31] will retry after 838.120228ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.852625ms)

** stderr ** 
	error: no server found for cluster "multinode-747000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:20.536983   10998 retry.go:31] will retry after 3.047166316s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.297ms)

** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:23.695765   10998 retry.go:31] will retry after 3.15415965s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.920292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:26.961219   10998 retry.go:31] will retry after 3.069821881s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.490667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:30.142858   10998 retry.go:31] will retry after 6.591382547s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.810625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:36.850912   10998 retry.go:31] will retry after 10.551404721s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.090791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:47.524755   10998 retry.go:31] will retry after 9.703501206s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.114667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:06:57.344908   10998 retry.go:31] will retry after 12.966379456s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.512459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 16:07:10.425565   10998 retry.go:31] will retry after 26.86827695s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.259292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.953375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.452875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.591875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.5145ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (79.54s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-747000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.209583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.670875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-747000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-747000 -v 3 --alsologtostderr: exit status 83 (47.0045ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-747000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-747000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:37.820944   12414 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:37.821151   12414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:37.821153   12414 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:37.821156   12414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:37.821277   12414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:37.821529   12414 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:37.821765   12414 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:37.825828   12414 out.go:177] * The control-plane node multinode-747000 host is not running: state=Stopped
	I1025 16:07:37.829808   12414 out.go:177]   To start a cluster, run: "minikube start -p multinode-747000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-747000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.376584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-747000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-747000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.734083ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-747000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-747000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-747000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.667458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-747000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-747000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-747000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-747000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.929917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status --output json --alsologtostderr: exit status 7 (35.216458ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-747000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:38.052409   12426 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:38.052597   12426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.052600   12426 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:38.052602   12426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.052727   12426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:38.052845   12426 out.go:352] Setting JSON to true
	I1025 16:07:38.052856   12426 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:38.052910   12426 notify.go:220] Checking for updates...
	I1025 16:07:38.053054   12426 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:38.053062   12426 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:38.053320   12426 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:38.053323   12426 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:38.053325   12426 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-747000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.475333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 node stop m03: exit status 85 (53.954459ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-747000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status: exit status 7 (35.415417ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr: exit status 7 (35.356166ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:38.212548   12434 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:38.212735   12434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.212739   12434 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:38.212741   12434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.212901   12434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:38.213034   12434 out.go:352] Setting JSON to false
	I1025 16:07:38.213045   12434 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:38.213096   12434 notify.go:220] Checking for updates...
	I1025 16:07:38.213260   12434 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:38.213268   12434 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:38.213524   12434 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:38.213528   12434 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:38.213530   12434 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr": multinode-747000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.606166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (45.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.121666ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:38.282464   12438 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:38.282890   12438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.282893   12438 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:38.282896   12438 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.283067   12438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:38.283301   12438 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:38.283492   12438 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:38.287617   12438 out.go:201] 
	W1025 16:07:38.291628   12438 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1025 16:07:38.291634   12438 out.go:270] * 
	* 
	W1025 16:07:38.293449   12438 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:07:38.296517   12438 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1025 16:07:38.282464   12438 out.go:345] Setting OutFile to fd 1 ...
I1025 16:07:38.282890   12438 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 16:07:38.282893   12438 out.go:358] Setting ErrFile to fd 2...
I1025 16:07:38.282896   12438 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 16:07:38.283067   12438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
I1025 16:07:38.283301   12438 mustload.go:65] Loading cluster: multinode-747000
I1025 16:07:38.283492   12438 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1025 16:07:38.287617   12438 out.go:201] 
W1025 16:07:38.291628   12438 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1025 16:07:38.291634   12438 out.go:270] * 
* 
W1025 16:07:38.293449   12438 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 16:07:38.296517   12438 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-747000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (34.546ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:38.333297   12440 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:38.333502   12440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.333505   12440 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:38.333508   12440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:38.333644   12440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:38.333770   12440 out.go:352] Setting JSON to false
	I1025 16:07:38.333779   12440 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:38.333835   12440 notify.go:220] Checking for updates...
	I1025 16:07:38.333985   12440 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:38.333994   12440 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:38.334244   12440 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:38.334247   12440 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:38.334249   12440 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:38.335145   10998 retry.go:31] will retry after 1.015294421s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (81.156375ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:39.431808   12442 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:39.432025   12442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:39.432029   12442 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:39.432031   12442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:39.432218   12442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:39.432366   12442 out.go:352] Setting JSON to false
	I1025 16:07:39.432378   12442 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:39.432419   12442 notify.go:220] Checking for updates...
	I1025 16:07:39.432617   12442 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:39.432626   12442 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:39.432932   12442 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:39.432936   12442 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:39.432939   12442 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:39.433963   10998 retry.go:31] will retry after 1.542918391s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (79.28125ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:41.056442   12444 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:41.056655   12444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:41.056659   12444 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:41.056662   12444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:41.056821   12444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:41.056980   12444 out.go:352] Setting JSON to false
	I1025 16:07:41.056993   12444 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:41.057026   12444 notify.go:220] Checking for updates...
	I1025 16:07:41.057231   12444 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:41.057240   12444 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:41.057531   12444 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:41.057535   12444 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:41.057538   12444 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:41.058539   10998 retry.go:31] will retry after 1.844675542s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (83.340916ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:42.986786   12446 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:42.987017   12446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:42.987021   12446 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:42.987024   12446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:42.987199   12446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:42.987338   12446 out.go:352] Setting JSON to false
	I1025 16:07:42.987351   12446 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:42.987384   12446 notify.go:220] Checking for updates...
	I1025 16:07:42.987608   12446 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:42.987617   12446 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:42.987896   12446 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:42.987900   12446 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:42.987903   12446 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:42.988918   10998 retry.go:31] will retry after 3.878037393s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (79.114167ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:46.946409   12448 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:46.946649   12448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:46.946653   12448 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:46.946656   12448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:46.946794   12448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:46.946930   12448 out.go:352] Setting JSON to false
	I1025 16:07:46.946943   12448 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:46.946979   12448 notify.go:220] Checking for updates...
	I1025 16:07:46.947173   12448 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:46.947183   12448 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:46.947464   12448 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:46.947468   12448 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:46.947471   12448 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:46.948502   10998 retry.go:31] will retry after 3.147219323s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (80.0625ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:50.176080   12450 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:50.176286   12450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:50.176290   12450 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:50.176293   12450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:50.176445   12450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:50.176610   12450 out.go:352] Setting JSON to false
	I1025 16:07:50.176622   12450 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:50.176656   12450 notify.go:220] Checking for updates...
	I1025 16:07:50.176862   12450 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:50.176871   12450 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:50.177164   12450 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:50.177168   12450 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:50.177171   12450 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:50.178152   10998 retry.go:31] will retry after 4.994162898s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (80.207875ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:07:55.252779   12452 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:07:55.252983   12452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:55.252987   12452 out.go:358] Setting ErrFile to fd 2...
	I1025 16:07:55.252990   12452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:07:55.253171   12452 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:07:55.253331   12452 out.go:352] Setting JSON to false
	I1025 16:07:55.253343   12452 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:07:55.253386   12452 notify.go:220] Checking for updates...
	I1025 16:07:55.253589   12452 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:07:55.253598   12452 status.go:174] checking status of multinode-747000 ...
	I1025 16:07:55.253875   12452 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:07:55.253880   12452 status.go:384] host is not running, skipping remaining checks
	I1025 16:07:55.253882   12452 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:07:55.254901   10998 retry.go:31] will retry after 14.304967702s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (79.506792ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:08:09.639594   12457 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:08:09.639811   12457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:09.639815   12457 out.go:358] Setting ErrFile to fd 2...
	I1025 16:08:09.639818   12457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:09.639981   12457 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:08:09.640141   12457 out.go:352] Setting JSON to false
	I1025 16:08:09.640154   12457 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:08:09.640196   12457 notify.go:220] Checking for updates...
	I1025 16:08:09.640420   12457 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:08:09.640431   12457 status.go:174] checking status of multinode-747000 ...
	I1025 16:08:09.640743   12457 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:08:09.640747   12457 status.go:384] host is not running, skipping remaining checks
	I1025 16:08:09.640750   12457 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1025 16:08:09.641748   10998 retry.go:31] will retry after 14.367691552s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr: exit status 7 (79.353209ms)

                                                
                                                
-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:08:24.088951   12459 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:08:24.089176   12459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:24.089180   12459 out.go:358] Setting ErrFile to fd 2...
	I1025 16:08:24.089183   12459 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:24.089355   12459 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:08:24.089499   12459 out.go:352] Setting JSON to false
	I1025 16:08:24.089511   12459 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:08:24.089552   12459 notify.go:220] Checking for updates...
	I1025 16:08:24.089792   12459 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:08:24.089802   12459 status.go:174] checking status of multinode-747000 ...
	I1025 16:08:24.090107   12459 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:08:24.090111   12459 status.go:384] host is not running, skipping remaining checks
	I1025 16:08:24.090114   12459 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-747000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (36.094209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.88s)

TestMultiNode/serial/RestartKeepsNodes (9.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-747000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-747000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-747000: (3.654451667s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-747000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-747000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.235068959s)

-- stdout --
	* [multinode-747000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-747000" primary control-plane node in "multinode-747000" cluster
	* Restarting existing qemu2 VM for "multinode-747000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-747000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:08:27.887524   12483 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:08:27.887710   12483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:27.887714   12483 out.go:358] Setting ErrFile to fd 2...
	I1025 16:08:27.887717   12483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:27.887869   12483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:08:27.889095   12483 out.go:352] Setting JSON to false
	I1025 16:08:27.909160   12483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6945,"bootTime":1729890762,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:08:27.909241   12483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:08:27.914103   12483 out.go:177] * [multinode-747000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:08:27.921071   12483 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:08:27.921143   12483 notify.go:220] Checking for updates...
	I1025 16:08:27.929005   12483 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:08:27.930394   12483 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:08:27.933016   12483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:08:27.936072   12483 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:08:27.939078   12483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:08:27.942313   12483 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:08:27.942377   12483 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:08:27.947050   12483 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:08:27.954032   12483 start.go:297] selected driver: qemu2
	I1025 16:08:27.954039   12483 start.go:901] validating driver "qemu2" against &{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:08:27.954106   12483 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:08:27.956680   12483 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:08:27.956705   12483 cni.go:84] Creating CNI manager for ""
	I1025 16:08:27.956730   12483 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 16:08:27.956777   12483 start.go:340] cluster config:
	{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:08:27.961237   12483 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:08:27.969048   12483 out.go:177] * Starting "multinode-747000" primary control-plane node in "multinode-747000" cluster
	I1025 16:08:27.973000   12483 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:08:27.973012   12483 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:08:27.973022   12483 cache.go:56] Caching tarball of preloaded images
	I1025 16:08:27.973087   12483 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:08:27.973092   12483 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:08:27.973139   12483 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/multinode-747000/config.json ...
	I1025 16:08:27.973556   12483 start.go:360] acquireMachinesLock for multinode-747000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:08:27.973602   12483 start.go:364] duration metric: took 40.459µs to acquireMachinesLock for "multinode-747000"
	I1025 16:08:27.973610   12483 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:08:27.973614   12483 fix.go:54] fixHost starting: 
	I1025 16:08:27.973725   12483 fix.go:112] recreateIfNeeded on multinode-747000: state=Stopped err=<nil>
	W1025 16:08:27.973734   12483 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:08:27.982024   12483 out.go:177] * Restarting existing qemu2 VM for "multinode-747000" ...
	I1025 16:08:27.986009   12483 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:08:27.986047   12483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:bc:9c:ed:fb:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:08:27.988281   12483 main.go:141] libmachine: STDOUT: 
	I1025 16:08:27.988300   12483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:08:27.988324   12483 fix.go:56] duration metric: took 14.708792ms for fixHost
	I1025 16:08:27.988328   12483 start.go:83] releasing machines lock for "multinode-747000", held for 14.722041ms
	W1025 16:08:27.988335   12483 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:08:27.988377   12483 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:08:27.988382   12483 start.go:729] Will try again in 5 seconds ...
	I1025 16:08:32.990555   12483 start.go:360] acquireMachinesLock for multinode-747000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:08:32.990937   12483 start.go:364] duration metric: took 302.25µs to acquireMachinesLock for "multinode-747000"
	I1025 16:08:32.991050   12483 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:08:32.991069   12483 fix.go:54] fixHost starting: 
	I1025 16:08:32.991739   12483 fix.go:112] recreateIfNeeded on multinode-747000: state=Stopped err=<nil>
	W1025 16:08:32.991764   12483 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:08:32.999165   12483 out.go:177] * Restarting existing qemu2 VM for "multinode-747000" ...
	I1025 16:08:33.003244   12483 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:08:33.003568   12483 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:bc:9c:ed:fb:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:08:33.013392   12483 main.go:141] libmachine: STDOUT: 
	I1025 16:08:33.013464   12483 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:08:33.013556   12483 fix.go:56] duration metric: took 22.487209ms for fixHost
	I1025 16:08:33.013581   12483 start.go:83] releasing machines lock for "multinode-747000", held for 22.624ms
	W1025 16:08:33.013750   12483 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-747000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-747000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:08:33.022119   12483 out.go:201] 
	W1025 16:08:33.026057   12483 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:08:33.026080   12483 out.go:270] * 
	* 
	W1025 16:08:33.028231   12483 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:08:33.038141   12483 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-747000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-747000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (36.120667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.04s)
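Note on the failure mode above: every "Restarting existing qemu2 VM" attempt dies at the same point, because libmachine launches QEMU through socket_vmnet_client and nothing is listening on /var/run/socket_vmnet ("Connection refused"), so the VM never gets its network file descriptor. A minimal manual check on the build host, assuming the socket_vmnet paths that appear in the logs (a diagnostic sketch, not part of the recorded test run):

	# Does the daemon socket exist at the path the driver uses?
	ls -l /var/run/socket_vmnet
	# Exercise the client the same way libmachine does, but wrapping a no-op
	# command; if the daemon is down, this reproduces "Connection refused".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true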

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 node delete m03: exit status 83 (47.339667ms)

-- stdout --
	* The control-plane node multinode-747000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-747000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-747000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr: exit status 7 (34.785833ms)

-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:08:33.243231   12497 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:08:33.243405   12497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:33.243408   12497 out.go:358] Setting ErrFile to fd 2...
	I1025 16:08:33.243410   12497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:33.243542   12497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:08:33.243654   12497 out.go:352] Setting JSON to false
	I1025 16:08:33.243664   12497 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:08:33.243726   12497 notify.go:220] Checking for updates...
	I1025 16:08:33.243868   12497 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:08:33.243876   12497 status.go:174] checking status of multinode-747000 ...
	I1025 16:08:33.244123   12497 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:08:33.244126   12497 status.go:384] host is not running, skipping remaining checks
	I1025 16:08:33.244128   12497 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (33.985333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)

TestMultiNode/serial/StopMultiNode (3.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-747000 stop: (2.863325708s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status: exit status 7 (72.225833ms)

-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr: exit status 7 (36.5055ms)

-- stdout --
	multinode-747000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 16:08:36.249811   12523 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:08:36.249982   12523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:36.249986   12523 out.go:358] Setting ErrFile to fd 2...
	I1025 16:08:36.249988   12523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:36.250122   12523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:08:36.250243   12523 out.go:352] Setting JSON to false
	I1025 16:08:36.250254   12523 mustload.go:65] Loading cluster: multinode-747000
	I1025 16:08:36.250318   12523 notify.go:220] Checking for updates...
	I1025 16:08:36.250476   12523 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:08:36.250484   12523 status.go:174] checking status of multinode-747000 ...
	I1025 16:08:36.250708   12523 status.go:371] multinode-747000 host status = "Stopped" (err=<nil>)
	I1025 16:08:36.250714   12523 status.go:384] host is not running, skipping remaining checks
	I1025 16:08:36.250716   12523 status.go:176] multinode-747000 status: &{Name:multinode-747000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr": multinode-747000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-747000 status --alsologtostderr": multinode-747000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (34.720708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.01s)

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-747000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-747000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186717083s)

-- stdout --
	* [multinode-747000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-747000" primary control-plane node in "multinode-747000" cluster
	* Restarting existing qemu2 VM for "multinode-747000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-747000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:08:36.318576   12527 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:08:36.318737   12527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:36.318740   12527 out.go:358] Setting ErrFile to fd 2...
	I1025 16:08:36.318742   12527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:08:36.318859   12527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:08:36.319916   12527 out.go:352] Setting JSON to false
	I1025 16:08:36.337458   12527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6954,"bootTime":1729890762,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:08:36.337537   12527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:08:36.342957   12527 out.go:177] * [multinode-747000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:08:36.350945   12527 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:08:36.350996   12527 notify.go:220] Checking for updates...
	I1025 16:08:36.357892   12527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:08:36.360964   12527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:08:36.364880   12527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:08:36.367921   12527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:08:36.369208   12527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:08:36.372227   12527 config.go:182] Loaded profile config "multinode-747000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:08:36.372512   12527 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:08:36.376846   12527 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:08:36.381895   12527 start.go:297] selected driver: qemu2
	I1025 16:08:36.381903   12527 start.go:901] validating driver "qemu2" against &{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:08:36.381967   12527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:08:36.384516   12527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:08:36.384540   12527 cni.go:84] Creating CNI manager for ""
	I1025 16:08:36.384565   12527 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 16:08:36.384613   12527 start.go:340] cluster config:
	{Name:multinode-747000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-747000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:08:36.388977   12527 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:08:36.396822   12527 out.go:177] * Starting "multinode-747000" primary control-plane node in "multinode-747000" cluster
	I1025 16:08:36.400882   12527 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:08:36.400904   12527 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:08:36.400915   12527 cache.go:56] Caching tarball of preloaded images
	I1025 16:08:36.400984   12527 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:08:36.400991   12527 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:08:36.401045   12527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/multinode-747000/config.json ...
	I1025 16:08:36.401474   12527 start.go:360] acquireMachinesLock for multinode-747000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:08:36.401505   12527 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "multinode-747000"
	I1025 16:08:36.401513   12527 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:08:36.401518   12527 fix.go:54] fixHost starting: 
	I1025 16:08:36.401629   12527 fix.go:112] recreateIfNeeded on multinode-747000: state=Stopped err=<nil>
	W1025 16:08:36.401635   12527 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:08:36.409890   12527 out.go:177] * Restarting existing qemu2 VM for "multinode-747000" ...
	I1025 16:08:36.413844   12527 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:08:36.413879   12527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:bc:9c:ed:fb:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:08:36.416048   12527 main.go:141] libmachine: STDOUT: 
	I1025 16:08:36.416066   12527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:08:36.416091   12527 fix.go:56] duration metric: took 14.573834ms for fixHost
	I1025 16:08:36.416094   12527 start.go:83] releasing machines lock for "multinode-747000", held for 14.585042ms
	W1025 16:08:36.416099   12527 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:08:36.416137   12527 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:08:36.416141   12527 start.go:729] Will try again in 5 seconds ...
	I1025 16:08:41.418347   12527 start.go:360] acquireMachinesLock for multinode-747000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:08:41.418813   12527 start.go:364] duration metric: took 318.208µs to acquireMachinesLock for "multinode-747000"
	I1025 16:08:41.418952   12527 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:08:41.418966   12527 fix.go:54] fixHost starting: 
	I1025 16:08:41.419652   12527 fix.go:112] recreateIfNeeded on multinode-747000: state=Stopped err=<nil>
	W1025 16:08:41.419672   12527 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:08:41.427141   12527 out.go:177] * Restarting existing qemu2 VM for "multinode-747000" ...
	I1025 16:08:41.431182   12527 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:08:41.431395   12527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:bc:9c:ed:fb:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/multinode-747000/disk.qcow2
	I1025 16:08:41.439261   12527 main.go:141] libmachine: STDOUT: 
	I1025 16:08:41.439321   12527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:08:41.439391   12527 fix.go:56] duration metric: took 20.423ms for fixHost
	I1025 16:08:41.439409   12527 start.go:83] releasing machines lock for "multinode-747000", held for 20.569166ms
	W1025 16:08:41.439596   12527 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-747000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-747000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:08:41.447248   12527 out.go:201] 
	W1025 16:08:41.450212   12527 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:08:41.450245   12527 out.go:270] * 
	* 
	W1025 16:08:41.452564   12527 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:08:41.461204   12527 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-747000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (68.271584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
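The fresh-create failures below (ValidateNameConflict, TestPreload) hit the same refused connection, which points at the socket_vmnet daemon itself rather than any one profile. If the daemon is Homebrew-managed, as minikube's qemu2 driver docs assume, a recovery sketch would be (hypothetical for this Jenkins agent; its service management is not shown in the log):

	# Restart the daemon if it is installed as a Homebrew service ...
	sudo brew services restart socket_vmnet
	# ... or run it in the foreground to watch it bind the socket
	# (192.168.105.1 is the documented default gateway, assumed here).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet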

TestMultiNode/serial/ValidateNameConflict (20.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-747000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-747000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-747000-m01 --driver=qemu2 : exit status 80 (9.977393708s)

-- stdout --
	* [multinode-747000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-747000-m01" primary control-plane node in "multinode-747000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-747000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-747000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-747000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-747000-m02 --driver=qemu2 : exit status 80 (10.003246167s)

-- stdout --
	* [multinode-747000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-747000-m02" primary control-plane node in "multinode-747000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-747000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-747000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-747000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-747000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-747000: exit status 83 (87.536167ms)

-- stdout --
	* The control-plane node multinode-747000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-747000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-747000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-747000 -n multinode-747000: exit status 7 (35.784792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-747000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.23s)

TestPreload (10.14s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-968000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-968000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.982374959s)

-- stdout --
	* [test-preload-968000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-968000" primary control-plane node in "test-preload-968000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-968000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:09:01.917680   12583 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:09:01.917844   12583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:01.917847   12583 out.go:358] Setting ErrFile to fd 2...
	I1025 16:09:01.917850   12583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:09:01.917990   12583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:09:01.919164   12583 out.go:352] Setting JSON to false
	I1025 16:09:01.936765   12583 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6979,"bootTime":1729890762,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:09:01.936842   12583 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:09:01.942992   12583 out.go:177] * [test-preload-968000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:09:01.948620   12583 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:09:01.948678   12583 notify.go:220] Checking for updates...
	I1025 16:09:01.954947   12583 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:09:01.956388   12583 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:09:01.960930   12583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:09:01.963964   12583 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:09:01.965407   12583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:09:01.968332   12583 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:09:01.968371   12583 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:09:01.972896   12583 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:09:01.977910   12583 start.go:297] selected driver: qemu2
	I1025 16:09:01.977916   12583 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:09:01.977924   12583 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:09:01.980359   12583 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:09:01.984931   12583 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:09:01.986445   12583 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:09:01.986467   12583 cni.go:84] Creating CNI manager for ""
	I1025 16:09:01.986495   12583 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:09:01.986502   12583 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:09:01.986530   12583 start.go:340] cluster config:
	{Name:test-preload-968000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:09:01.991178   12583 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:01.998955   12583 out.go:177] * Starting "test-preload-968000" primary control-plane node in "test-preload-968000" cluster
	I1025 16:09:02.002945   12583 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1025 16:09:02.003023   12583 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/test-preload-968000/config.json ...
	I1025 16:09:02.003039   12583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/test-preload-968000/config.json: {Name:mke0726473777631f0f07d106f933c35191798e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:09:02.003037   12583 cache.go:107] acquiring lock: {Name:mka77912f6392ad84bb54095bdca3bc598633fbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003048   12583 cache.go:107] acquiring lock: {Name:mkf17abe2a494c98caeb7dd57923c6230e05330c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003078   12583 cache.go:107] acquiring lock: {Name:mk8a94030a535fd099a9ac5a53ce488afedb8fcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003164   12583 cache.go:107] acquiring lock: {Name:mk6c2e12e402287efa3f9fc1b0b11026079f880e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003225   12583 cache.go:107] acquiring lock: {Name:mk159d354911f02ecaca26be1afd96b6e5b330a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003238   12583 cache.go:107] acquiring lock: {Name:mkcacac70bdc3f099e88646b6412db3c30f40a59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003289   12583 cache.go:107] acquiring lock: {Name:mk7521be4cd6c58e41895974d14c02cca46e61fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003320   12583 cache.go:107] acquiring lock: {Name:mkaabc31719af712838a9e0624a9cd8514bb24ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:09:02.003623   12583 start.go:360] acquireMachinesLock for test-preload-968000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:02.003644   12583 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 16:09:02.003719   12583 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 16:09:02.003829   12583 start.go:364] duration metric: took 195.083µs to acquireMachinesLock for "test-preload-968000"
	I1025 16:09:02.003843   12583 start.go:93] Provisioning new machine with config: &{Name:test-preload-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:02.003871   12583 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:02.003956   12583 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:09:02.004025   12583 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 16:09:02.004033   12583 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 16:09:02.004073   12583 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:09:02.008260   12583 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 16:09:02.008276   12583 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:09:02.011918   12583 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:09:02.015091   12583 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 16:09:02.015124   12583 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:09:02.015234   12583 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 16:09:02.015846   12583 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:09:02.018078   12583 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 16:09:02.018214   12583 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 16:09:02.018307   12583 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 16:09:02.018322   12583 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:09:02.029539   12583 start.go:159] libmachine.API.Create for "test-preload-968000" (driver="qemu2")
	I1025 16:09:02.029561   12583 client.go:168] LocalClient.Create starting
	I1025 16:09:02.029650   12583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:02.029688   12583 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:02.029701   12583 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:02.029738   12583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:02.029770   12583 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:02.029779   12583 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:02.030158   12583 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:02.231333   12583 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:02.291611   12583 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:02.291637   12583 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:02.291870   12583 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2
	I1025 16:09:02.302472   12583 main.go:141] libmachine: STDOUT: 
	I1025 16:09:02.302516   12583 main.go:141] libmachine: STDERR: 
	I1025 16:09:02.302576   12583 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2 +20000M
	I1025 16:09:02.311353   12583 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:02.311370   12583 main.go:141] libmachine: STDERR: 
	I1025 16:09:02.311383   12583 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2
	I1025 16:09:02.311387   12583 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:02.311397   12583 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:02.311427   12583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:84:d5:0d:cd:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2
	I1025 16:09:02.313345   12583 main.go:141] libmachine: STDOUT: 
	I1025 16:09:02.313359   12583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:02.313378   12583 client.go:171] duration metric: took 283.81325ms to LocalClient.Create
	I1025 16:09:02.472028   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 16:09:02.525157   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W1025 16:09:02.559374   12583 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 16:09:02.559408   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 16:09:02.658717   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1025 16:09:02.680997   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1025 16:09:02.681011   12583 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 677.866666ms
	I1025 16:09:02.681021   12583 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1025 16:09:02.731616   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1025 16:09:02.748993   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1025 16:09:02.869521   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1025 16:09:02.950137   12583 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 16:09:02.950220   12583 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 16:09:03.401715   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 16:09:03.401785   12583 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.398754666s
	I1025 16:09:03.401821   12583 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 16:09:04.313591   12583 start.go:128] duration metric: took 2.309712916s to createHost
	I1025 16:09:04.313642   12583 start.go:83] releasing machines lock for "test-preload-968000", held for 2.3098165s
	W1025 16:09:04.313707   12583 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:04.330043   12583 out.go:177] * Deleting "test-preload-968000" in qemu2 ...
	W1025 16:09:04.360155   12583 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:04.360180   12583 start.go:729] Will try again in 5 seconds ...
	I1025 16:09:04.360380   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1025 16:09:04.360403   12583 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.357129958s
	I1025 16:09:04.360428   12583 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1025 16:09:05.278539   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1025 16:09:05.278607   12583 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.275570875s
	I1025 16:09:05.278649   12583 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1025 16:09:07.655238   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1025 16:09:07.655290   12583 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.652129542s
	I1025 16:09:07.655314   12583 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1025 16:09:08.601408   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1025 16:09:08.601460   12583 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.598462584s
	I1025 16:09:08.601487   12583 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1025 16:09:08.971965   12583 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1025 16:09:08.972006   12583 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.96885925s
	I1025 16:09:08.972028   12583 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1025 16:09:09.360409   12583 start.go:360] acquireMachinesLock for test-preload-968000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:09:09.360882   12583 start.go:364] duration metric: took 404.542µs to acquireMachinesLock for "test-preload-968000"
	I1025 16:09:09.360982   12583 start.go:93] Provisioning new machine with config: &{Name:test-preload-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-968000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:09:09.361218   12583 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:09:09.378044   12583 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:09:09.428215   12583 start.go:159] libmachine.API.Create for "test-preload-968000" (driver="qemu2")
	I1025 16:09:09.428268   12583 client.go:168] LocalClient.Create starting
	I1025 16:09:09.428396   12583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:09:09.428476   12583 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:09.428526   12583 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:09.428582   12583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:09:09.428638   12583 main.go:141] libmachine: Decoding PEM data...
	I1025 16:09:09.428654   12583 main.go:141] libmachine: Parsing certificate...
	I1025 16:09:09.429191   12583 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:09:09.596171   12583 main.go:141] libmachine: Creating SSH key...
	I1025 16:09:09.795336   12583 main.go:141] libmachine: Creating Disk image...
	I1025 16:09:09.795352   12583 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:09:09.795560   12583 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2
	I1025 16:09:09.806020   12583 main.go:141] libmachine: STDOUT: 
	I1025 16:09:09.806042   12583 main.go:141] libmachine: STDERR: 
	I1025 16:09:09.806123   12583 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2 +20000M
	I1025 16:09:09.814917   12583 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:09:09.814934   12583 main.go:141] libmachine: STDERR: 
	I1025 16:09:09.814950   12583 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2
	I1025 16:09:09.814955   12583 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:09:09.814966   12583 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:09:09.815006   12583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:db:33:5b:d5:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/test-preload-968000/disk.qcow2
	I1025 16:09:09.816933   12583 main.go:141] libmachine: STDOUT: 
	I1025 16:09:09.816948   12583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:09:09.816963   12583 client.go:171] duration metric: took 388.692625ms to LocalClient.Create
	I1025 16:09:11.817170   12583 start.go:128] duration metric: took 2.45593925s to createHost
	I1025 16:09:11.817211   12583 start.go:83] releasing machines lock for "test-preload-968000", held for 2.456322s
	W1025 16:09:11.817579   12583 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-968000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:09:11.832215   12583 out.go:201] 
	W1025 16:09:11.836292   12583 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:09:11.836356   12583 out.go:270] * 
	W1025 16:09:11.838756   12583 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:09:11.852183   12583 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-968000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-25 16:09:11.869969 -0700 PDT m=+675.126354084
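
Every failed start in this report dies on the same "Failed to connect to "/var/run/socket_vmnet": Connection refused", which points at the socket_vmnet daemon not listening on the build agent rather than at any individual test. A quick health check, assuming the Homebrew-installed socket_vmnet these runners appear to use (paths are the defaults shown in the trace above):

	ls -l /var/run/socket_vmnet                 # the daemon's listening socket should exist
	sudo launchctl list | grep -i socket_vmnet  # is the service loaded at all?
	# if not, restart it however it was installed, e.g. via its Homebrew service
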
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-968000 -n test-preload-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-968000 -n test-preload-968000: exit status 7 (73.560375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-968000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-968000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-968000
--- FAIL: TestPreload (10.14s)
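
The two "arch mismatch: want arm64 got amd64. fixing" warnings in the trace above are corrected by minikube itself (it re-fetches the right platform for its cache), but the registry side is easy to confirm by hand; a sketch with the stock Docker CLI (output elided; `docker manifest` may need the experimental CLI enabled on older Docker releases):

	docker manifest inspect registry.k8s.io/coredns/coredns:v1.8.6 | grep -A2 '"architecture"'
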

                                                
                                    
TestScheduledStopUnix (9.99s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-520000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-520000 --memory=2048 --driver=qemu2 : exit status 80 (9.83249725s)

                                                
                                                
-- stdout --
	* [scheduled-stop-520000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-520000" primary control-plane node in "scheduled-stop-520000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-25 16:09:21.858877 -0700 PDT m=+685.115331418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-520000 -n scheduled-stop-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-520000 -n scheduled-stop-520000: exit status 7 (75.413625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-520000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-520000
--- FAIL: TestScheduledStopUnix (9.99s)
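
For reference, the scheduled-stop behavior this test exercises is driven by a real minikube flag; a sketch of the manual equivalent once a cluster actually starts (profile name from this run, duration arbitrary):

	minikube start -p scheduled-stop-520000 --memory=2048 --driver=qemu2
	minikube stop -p scheduled-stop-520000 --schedule 5m   # arm a stop five minutes out
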

                                                
                                    
TestSkaffold (12.54s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1273401365 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1273401365 version: (1.02034275s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-303000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-303000 --memory=2600 --driver=qemu2 : exit status 80 (9.836149292s)

                                                
                                                
-- stdout --
	* [skaffold-303000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-303000" primary control-plane node in "skaffold-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
panic.go:629: *** TestSkaffold FAILED at 2024-10-25 16:09:34.405525 -0700 PDT m=+697.662065918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-303000 -n skaffold-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-303000 -n skaffold-303000: exit status 7 (69.534667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-303000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-303000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-303000
--- FAIL: TestSkaffold (12.54s)
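
The skaffold binary itself is healthy here (v2.13.2 resolves above); only the cluster start fails. Once a cluster exists, the test amounts to pointing skaffold at that context; a rough manual equivalent (the working directory with a skaffold.yaml is hypothetical, not from this run):

	minikube start -p skaffold-303000 --memory=2600 --driver=qemu2
	skaffold run --kube-context skaffold-303000   # build and deploy whatever skaffold.yaml defines
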

                                                
                                    
TestRunningBinaryUpgrade (588.64s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2790709861 start -p running-upgrade-023000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2790709861 start -p running-upgrade-023000 --memory=2200 --vm-driver=qemu2 : (52.071886959s)
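
The old-release start above succeeds in roughly 52s; it is the in-place restart by the new binary, below, that fails (exit status 80 after 8m23s). The two-binary flow under test, as run verbatim in this trace:

	/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2790709861 start -p running-upgrade-023000 --memory=2200 --vm-driver=qemu2
	out/minikube-darwin-arm64 start -p running-upgrade-023000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
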
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-023000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-023000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.229125s)

                                                
                                                
-- stdout --
	* [running-upgrade-023000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-023000" primary control-plane node in "running-upgrade-023000" cluster
	* Updating the running qemu2 "running-upgrade-023000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:11:08.333326   12967 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:11:08.333641   12967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:11:08.333645   12967 out.go:358] Setting ErrFile to fd 2...
	I1025 16:11:08.333647   12967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:11:08.333793   12967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:11:08.334841   12967 out.go:352] Setting JSON to false
	I1025 16:11:08.353718   12967 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7106,"bootTime":1729890762,"procs":509,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:11:08.353791   12967 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:11:08.359368   12967 out.go:177] * [running-upgrade-023000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:11:08.366327   12967 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:11:08.366377   12967 notify.go:220] Checking for updates...
	I1025 16:11:08.373324   12967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:11:08.377342   12967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:11:08.380370   12967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:11:08.383340   12967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:11:08.386354   12967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:11:08.389681   12967 config.go:182] Loaded profile config "running-upgrade-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:11:08.393283   12967 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1025 16:11:08.396358   12967 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:11:08.399367   12967 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:11:08.406314   12967 start.go:297] selected driver: qemu2
	I1025 16:11:08.406320   12967 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62164 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:11:08.406365   12967 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:11:08.409078   12967 cni.go:84] Creating CNI manager for ""
	I1025 16:11:08.409193   12967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:11:08.409230   12967 start.go:340] cluster config:
	{Name:running-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62164 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:11:08.409296   12967 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:11:08.417318   12967 out.go:177] * Starting "running-upgrade-023000" primary control-plane node in "running-upgrade-023000" cluster
	I1025 16:11:08.421395   12967 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 16:11:08.421413   12967 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1025 16:11:08.421424   12967 cache.go:56] Caching tarball of preloaded images
	I1025 16:11:08.421481   12967 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:11:08.421487   12967 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1025 16:11:08.421546   12967 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/config.json ...
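
The preload check above resolves a versioned tarball name from the Kubernetes version, container runtime, and CPU architecture, and skips the download when the file is already cached. A minimal sketch of that lookup under the same naming scheme seen in the log; the cache directory and the "v18" schema prefix are taken from the paths logged above, not from minikube's API:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the tarball naming visible in the log:
// preloaded-images-k8s-v18-<k8sVersion>-<runtime>-overlay2-<arch>.tar.lz4
func preloadPath(cacheDir, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
		k8sVersion, runtime, arch)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.24.1", "docker", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}
```
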
	I1025 16:11:08.421985   12967 start.go:360] acquireMachinesLock for running-upgrade-023000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:11:08.422018   12967 start.go:364] duration metric: took 26.709µs to acquireMachinesLock for "running-upgrade-023000"
	I1025 16:11:08.422025   12967 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:11:08.422030   12967 fix.go:54] fixHost starting: 
	I1025 16:11:08.422670   12967 fix.go:112] recreateIfNeeded on running-upgrade-023000: state=Running err=<nil>
	W1025 16:11:08.422679   12967 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:11:08.427295   12967 out.go:177] * Updating the running qemu2 "running-upgrade-023000" VM ...
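
provisionDockerMachine drives the guest entirely over SSH: it dials the forwarded port on localhost (62132 here) as user "docker" and runs one command per step, starting with a bare `hostname` probe. A minimal sketch of that round-trip using golang.org/x/crypto/ssh; the key path and port are copied from the log, and the host-key check is skipped only because the target is a local VM:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/running-upgrade-023000/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only
	}
	client, err := ssh.Dial("tcp", "localhost:62132", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname") // first provisioning probe in the log
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest hostname: %s", out)
}
```
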
	I1025 16:11:08.435312   12967 machine.go:93] provisionDockerMachine start ...
	I1025 16:11:08.435371   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:08.435513   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:08.435519   12967 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 16:11:08.496743   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-023000
	
	I1025 16:11:08.496759   12967 buildroot.go:166] provisioning hostname "running-upgrade-023000"
	I1025 16:11:08.496836   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:08.496947   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:08.496955   12967 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-023000 && echo "running-upgrade-023000" | sudo tee /etc/hostname
	I1025 16:11:08.560837   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-023000
	
	I1025 16:11:08.560895   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:08.561007   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:08.561018   12967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-023000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-023000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-023000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 16:11:08.620168   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 16:11:08.620179   12967 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19758-10490/.minikube CaCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19758-10490/.minikube}
	I1025 16:11:08.620189   12967 buildroot.go:174] setting up certificates
	I1025 16:11:08.620193   12967 provision.go:84] configureAuth start
	I1025 16:11:08.620198   12967 provision.go:143] copyHostCerts
	I1025 16:11:08.620267   12967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem, removing ...
	I1025 16:11:08.620278   12967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem
	I1025 16:11:08.620430   12967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem (1078 bytes)
	I1025 16:11:08.620657   12967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem, removing ...
	I1025 16:11:08.620660   12967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem
	I1025 16:11:08.620701   12967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem (1123 bytes)
	I1025 16:11:08.620821   12967 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem, removing ...
	I1025 16:11:08.620824   12967 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem
	I1025 16:11:08.620867   12967 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem (1675 bytes)
	I1025 16:11:08.620974   12967 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-023000 san=[127.0.0.1 localhost minikube running-upgrade-023000]
	I1025 16:11:08.750320   12967 provision.go:177] copyRemoteCerts
	I1025 16:11:08.750375   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 16:11:08.750393   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:11:08.783941   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 16:11:08.791502   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 16:11:08.798656   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 16:11:08.805408   12967 provision.go:87] duration metric: took 185.209375ms to configureAuth
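
configureAuth regenerates the Docker server certificate whenever the SAN list (here 127.0.0.1, localhost, minikube, and the machine name) changes. A minimal sketch of issuing such a SAN-bearing certificate with Go's standard library; it is self-signed for brevity, whereas minikube actually signs with its ca.pem/ca-key.pem pair:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-023000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		DNSNames:     []string{"localhost", "minikube", "running-upgrade-023000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for the sketch; the real code signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
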
	I1025 16:11:08.805418   12967 buildroot.go:189] setting minikube options for container-runtime
	I1025 16:11:08.805523   12967 config.go:182] Loaded profile config "running-upgrade-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:11:08.805563   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:08.805657   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:08.805662   12967 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 16:11:08.865709   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 16:11:08.865719   12967 buildroot.go:70] root file system type: tmpfs
	I1025 16:11:08.865769   12967 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 16:11:08.865835   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:08.865951   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:08.865983   12967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 16:11:08.929580   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 16:11:08.929652   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:08.929778   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:08.929786   12967 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 16:11:08.989272   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 16:11:08.989286   12967 machine.go:96] duration metric: took 553.972084ms to provisionDockerMachine
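
The unit update just logged is deliberately idempotent: `diff -u` succeeds when the rendered docker.service.new matches the installed file, so the mv/daemon-reload/restart chain only fires on a real change. The same compare-before-replace pattern, sketched locally in Go with assumed file names:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged installs newPath over path only when the contents differ,
// returning true when the caller should reload and restart the service.
func replaceIfChanged(path, newPath string) (bool, error) {
	oldData, _ := os.ReadFile(path) // a missing file reads as empty: treated as changed
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath)
	}
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := replaceIfChanged("docker.service", "docker.service.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed) // mirrors the || { ... } branch in the log
}
```
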
	I1025 16:11:08.989293   12967 start.go:293] postStartSetup for "running-upgrade-023000" (driver="qemu2")
	I1025 16:11:08.989299   12967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 16:11:08.989370   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 16:11:08.989380   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:11:09.022603   12967 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 16:11:09.024008   12967 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 16:11:09.024015   12967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19758-10490/.minikube/addons for local assets ...
	I1025 16:11:09.024078   12967 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19758-10490/.minikube/files for local assets ...
	I1025 16:11:09.024173   12967 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem -> 109982.pem in /etc/ssl/certs
	I1025 16:11:09.024267   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 16:11:09.027633   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem --> /etc/ssl/certs/109982.pem (1708 bytes)
	I1025 16:11:09.035409   12967 start.go:296] duration metric: took 46.110667ms for postStartSetup
	I1025 16:11:09.035422   12967 fix.go:56] duration metric: took 613.397166ms for fixHost
	I1025 16:11:09.035464   12967 main.go:141] libmachine: Using SSH client type: native
	I1025 16:11:09.035560   12967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x103262480] 0x103264cc0 <nil>  [] 0s} localhost 62132 <nil> <nil>}
	I1025 16:11:09.035565   12967 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 16:11:09.093401   12967 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729897869.010962014
	
	I1025 16:11:09.093409   12967 fix.go:216] guest clock: 1729897869.010962014
	I1025 16:11:09.093413   12967 fix.go:229] Guest: 2024-10-25 16:11:09.010962014 -0700 PDT Remote: 2024-10-25 16:11:09.035423 -0700 PDT m=+0.724683376 (delta=-24.460986ms)
	I1025 16:11:09.093424   12967 fix.go:200] guest clock delta is within tolerance: -24.460986ms
	I1025 16:11:09.093427   12967 start.go:83] releasing machines lock for "running-upgrade-023000", held for 671.409458ms
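
The fixHost step above samples the guest's `date +%s.%N`, compares it against the host clock, and only resyncs when the delta leaves tolerance; here the -24ms drift is well inside bounds. A sketch of that delta computation, with the tolerance value assumed for illustration rather than taken from minikube:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1729897869.010962014" // `date +%s.%N` output captured in the log
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := guest.Sub(time.Now()) // the log computed -24.460986ms at this point

	const tolerance = time.Second // assumed threshold for this sketch
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Println("guest clock within tolerance, skipping resync")
	} else {
		fmt.Println("guest clock drift", delta, "- would resync")
	}
}
```
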
	I1025 16:11:09.093501   12967 ssh_runner.go:195] Run: cat /version.json
	I1025 16:11:09.093511   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:11:09.093501   12967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 16:11:09.093553   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	W1025 16:11:09.094013   12967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:62270->127.0.0.1:62132: read: connection reset by peer
	I1025 16:11:09.094028   12967 retry.go:31] will retry after 181.593788ms: ssh: handshake failed: read tcp 127.0.0.1:62270->127.0.0.1:62132: read: connection reset by peer
	W1025 16:11:09.125344   12967 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 16:11:09.125387   12967 ssh_runner.go:195] Run: systemctl --version
	I1025 16:11:09.127238   12967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 16:11:09.129069   12967 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 16:11:09.129097   12967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 16:11:09.132273   12967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 16:11:09.137046   12967 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 16:11:09.137052   12967 start.go:495] detecting cgroup driver to use...
	I1025 16:11:09.137184   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 16:11:09.142648   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1025 16:11:09.145922   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 16:11:09.148818   12967 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 16:11:09.148848   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 16:11:09.152257   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 16:11:09.155789   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 16:11:09.158753   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 16:11:09.161916   12967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 16:11:09.164795   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 16:11:09.167764   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 16:11:09.171310   12967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 16:11:09.174480   12967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 16:11:09.177049   12967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 16:11:09.180257   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:11:09.275166   12967 ssh_runner.go:195] Run: sudo systemctl restart containerd
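
Each sed above patches one key of /etc/containerd/config.toml in place (sandbox image, runc shim, CNI conf dir, and the cgroup driver) before the daemon-reload and restart. A rough Go equivalent of the SystemdCgroup rewrite, operating on an in-memory copy rather than the remote file:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	config := []byte("[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n")

	// Same rewrite as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))

	fmt.Print(string(patched))
}
```
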
	I1025 16:11:09.286990   12967 start.go:495] detecting cgroup driver to use...
	I1025 16:11:09.287086   12967 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 16:11:09.292665   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 16:11:09.297845   12967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 16:11:09.304054   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 16:11:09.308893   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 16:11:09.348567   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 16:11:09.354318   12967 ssh_runner.go:195] Run: which cri-dockerd
	I1025 16:11:09.355492   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 16:11:09.358056   12967 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 16:11:09.362707   12967 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 16:11:09.457974   12967 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 16:11:09.556463   12967 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 16:11:09.556516   12967 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 16:11:09.561838   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:11:09.650513   12967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 16:11:12.320297   12967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.669786792s)
	I1025 16:11:12.320382   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 16:11:12.325322   12967 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1025 16:11:12.331459   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 16:11:12.336799   12967 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 16:11:12.431123   12967 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 16:11:12.503390   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:11:12.584220   12967 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 16:11:12.590917   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 16:11:12.595579   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:11:12.679845   12967 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 16:11:12.720066   12967 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 16:11:12.720152   12967 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 16:11:12.722260   12967 start.go:563] Will wait 60s for crictl version
	I1025 16:11:12.722309   12967 ssh_runner.go:195] Run: which crictl
	I1025 16:11:12.723729   12967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 16:11:12.740759   12967 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1025 16:11:12.740840   12967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 16:11:12.753437   12967 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 16:11:12.775367   12967 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1025 16:11:12.775451   12967 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1025 16:11:12.776822   12967 kubeadm.go:883] updating cluster {Name:running-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62164 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1025 16:11:12.776864   12967 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 16:11:12.776916   12967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 16:11:12.787254   12967 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 16:11:12.787262   12967 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 16:11:12.787321   12967 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 16:11:12.790454   12967 ssh_runner.go:195] Run: which lz4
	I1025 16:11:12.791752   12967 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 16:11:12.792916   12967 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 16:11:12.792932   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1025 16:11:13.771712   12967 docker.go:653] duration metric: took 980.010292ms to copy over tarball
	I1025 16:11:13.771790   12967 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 16:11:14.955453   12967 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.183654s)
	I1025 16:11:14.955466   12967 ssh_runner.go:146] rm: /preloaded.tar.lz4
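
The stat failure a few lines up is how the runner decides between reuse and transfer: a `stat` probe that exits non-zero means the ~360 MB preload tarball must be streamed over SSH before `tar -I lz4` unpacks it into /var and the local copy is removed. A sketch of that probe-then-stream step; `ensureRemoteFile` is a hypothetical helper, and the client would be built as in the SSH dial sketch earlier:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// ensureRemoteFile copies local to remote only when the stat probe fails,
// mirroring ssh_runner's existence check followed by the scp in the log.
func ensureRemoteFile(client *ssh.Client, local, remote string) error {
	probe, err := client.NewSession()
	if err != nil {
		return err
	}
	statErr := probe.Run(fmt.Sprintf("stat -c '%%s %%y' %s", remote))
	probe.Close()
	if statErr == nil {
		return nil // already present, skip the transfer
	}

	f, err := os.Open(local)
	if err != nil {
		return err
	}
	defer f.Close()

	copySess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer copySess.Close()
	copySess.Stdin = f
	// Stream the tarball; the real code uses scp framing rather than cat.
	return copySess.Run(fmt.Sprintf("sudo sh -c 'cat > %s'", remote))
}

func main() {} // construct the *ssh.Client as in the earlier dial sketch
```
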
	I1025 16:11:14.970826   12967 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 16:11:14.973654   12967 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1025 16:11:14.978566   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:11:15.049854   12967 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 16:11:16.226609   12967 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.176748208s)
	I1025 16:11:16.226706   12967 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 16:11:16.237435   12967 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 16:11:16.237444   12967 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 16:11:16.237449   12967 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 16:11:16.243045   12967 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:11:16.244881   12967 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:11:16.245957   12967 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:11:16.246088   12967 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:11:16.247774   12967 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:11:16.247790   12967 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:11:16.248997   12967 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:11:16.249154   12967 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:11:16.249966   12967 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:11:16.250402   12967 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:11:16.251649   12967 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 16:11:16.252190   12967 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:11:16.252236   12967 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:11:16.252837   12967 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:11:16.253774   12967 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 16:11:16.254616   12967 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:11:16.736876   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:11:16.749106   12967 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1025 16:11:16.749138   12967 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:11:16.749208   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:11:16.759078   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1025 16:11:16.774586   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:11:16.788247   12967 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1025 16:11:16.788272   12967 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:11:16.788334   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:11:16.798644   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1025 16:11:16.814275   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 16:11:16.825377   12967 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1025 16:11:16.825399   12967 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:11:16.825464   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W1025 16:11:16.833915   12967 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 16:11:16.834071   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:11:16.836731   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 16:11:16.847600   12967 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1025 16:11:16.847626   12967 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:11:16.847681   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:11:16.858190   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 16:11:16.858333   12967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 16:11:16.864227   12967 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1025 16:11:16.864238   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1025 16:11:16.907819   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:11:16.909281   12967 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 16:11:16.909288   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1025 16:11:16.925317   12967 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1025 16:11:16.925345   12967 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:11:16.925409   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:11:16.946539   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 16:11:16.966917   12967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1025 16:11:16.966979   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1025 16:11:16.966995   12967 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1025 16:11:16.967012   12967 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1025 16:11:16.967068   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1025 16:11:16.986837   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 16:11:16.986988   12967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 16:11:16.992004   12967 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1025 16:11:16.992032   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1025 16:11:16.999608   12967 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 16:11:16.999621   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1025 16:11:17.038470   12967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1025 16:11:17.039959   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:11:17.052047   12967 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1025 16:11:17.052071   12967 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:11:17.052147   12967 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:11:17.077970   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W1025 16:11:17.170163   12967 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 16:11:17.170276   12967 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:11:17.186025   12967 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 16:11:17.186050   12967 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:11:17.186107   12967 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:11:17.236739   12967 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 16:11:17.236891   12967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 16:11:17.238631   12967 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 16:11:17.238659   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 16:11:17.272667   12967 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 16:11:17.272682   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 16:11:17.571054   12967 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 16:11:17.571093   12967 cache_images.go:92] duration metric: took 1.33364725s to LoadCachedImages
	W1025 16:11:17.571134   12967 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
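
Every image in the loop above follows the same decision: `docker image inspect` the tag, and if the returned ID does not match the hash the cache expects, remove the stale tag and reload the cached tarball through `docker load`. A local sketch of that needs-transfer check with os/exec; the expected ID and tarball path are copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the tag is absent or resolves to a different
// image ID than the cache expects, as in the cache_images.go:116 lines above.
func needsTransfer(tag, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func loadFromCache(tarball string) error {
	// Same pipeline as the log: sudo cat <tarball> | docker load
	return exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", tarball)).Run()
}

func main() {
	if needsTransfer("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550") {
		if err := loadFromCache("/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
```
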
	I1025 16:11:17.571140   12967 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1025 16:11:17.571194   12967 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-023000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
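
The kubelet drop-in above uses the standard systemd reset idiom: an empty `ExecStart=` first clears whatever the base unit declared, then the full command line is set fresh (the docker.service rendered earlier relies on the same trick). A sketch of assembling that flag line from the node config, with the helper name being an illustration rather than minikube's:

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart rebuilds the ExecStart line shown in the unit above.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
		version, strings.Join(flags, " "))
}

func main() {
	fmt.Println("ExecStart=") // clear the inherited ExecStart first
	fmt.Println("ExecStart=" + kubeletExecStart("v1.24.1", "running-upgrade-023000", "10.0.2.15"))
}
```
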
	I1025 16:11:17.571280   12967 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 16:11:17.606075   12967 cni.go:84] Creating CNI manager for ""
	I1025 16:11:17.606087   12967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:11:17.606096   12967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 16:11:17.606105   12967 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-023000 NodeName:running-upgrade-023000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 16:11:17.606182   12967 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-023000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 16:11:17.606258   12967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1025 16:11:17.609404   12967 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 16:11:17.609442   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 16:11:17.612273   12967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 16:11:17.619302   12967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 16:11:17.624278   12967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1025 16:11:17.629299   12967 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1025 16:11:17.630926   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:11:17.749395   12967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:11:17.754325   12967 certs.go:68] Setting up /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000 for IP: 10.0.2.15
	I1025 16:11:17.754346   12967 certs.go:194] generating shared ca certs ...
	I1025 16:11:17.754355   12967 certs.go:226] acquiring lock for ca certs: {Name:mk87b032e78a00eded37575daed7123f238f6628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:11:17.754621   12967 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.key
	I1025 16:11:17.754659   12967 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.key
	I1025 16:11:17.754667   12967 certs.go:256] generating profile certs ...
	I1025 16:11:17.754728   12967 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.key
	I1025 16:11:17.754742   12967 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.key.a168ae89
	I1025 16:11:17.754752   12967 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.crt.a168ae89 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1025 16:11:17.863342   12967 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.crt.a168ae89 ...
	I1025 16:11:17.863348   12967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.crt.a168ae89: {Name:mk1a26f786d40e383f9af03e16798e6ffc1fffd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:11:17.863648   12967 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.key.a168ae89 ...
	I1025 16:11:17.863652   12967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.key.a168ae89: {Name:mk96c8388e7a41df08864cc68b7495fc49ecb3e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:11:17.863810   12967 certs.go:381] copying /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.crt.a168ae89 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.crt
	I1025 16:11:17.863946   12967 certs.go:385] copying /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.key.a168ae89 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.key
	I1025 16:11:17.864072   12967 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/proxy-client.key
	I1025 16:11:17.864204   12967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998.pem (1338 bytes)
	W1025 16:11:17.864228   12967 certs.go:480] ignoring /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998_empty.pem, impossibly tiny 0 bytes
	I1025 16:11:17.864233   12967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 16:11:17.864252   12967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem (1078 bytes)
	I1025 16:11:17.864271   12967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem (1123 bytes)
	I1025 16:11:17.864292   12967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem (1675 bytes)
	I1025 16:11:17.864336   12967 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem (1708 bytes)
	I1025 16:11:17.864796   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 16:11:17.872241   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 16:11:17.881292   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 16:11:17.889495   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 16:11:17.897622   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 16:11:17.905299   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 16:11:17.919274   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 16:11:17.926279   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 16:11:17.933421   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 16:11:17.941262   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998.pem --> /usr/share/ca-certificates/10998.pem (1338 bytes)
	I1025 16:11:17.959804   12967 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem --> /usr/share/ca-certificates/109982.pem (1708 bytes)
	I1025 16:11:17.971765   12967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 16:11:17.976773   12967 ssh_runner.go:195] Run: openssl version
	I1025 16:11:17.978685   12967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 16:11:17.981687   12967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:11:17.983197   12967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 23:10 /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:11:17.983224   12967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:11:17.985010   12967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 16:11:17.988165   12967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10998.pem && ln -fs /usr/share/ca-certificates/10998.pem /etc/ssl/certs/10998.pem"
	I1025 16:11:17.991867   12967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10998.pem
	I1025 16:11:17.993317   12967 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 22:58 /usr/share/ca-certificates/10998.pem
	I1025 16:11:17.993346   12967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10998.pem
	I1025 16:11:17.995212   12967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10998.pem /etc/ssl/certs/51391683.0"
	I1025 16:11:17.998042   12967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109982.pem && ln -fs /usr/share/ca-certificates/109982.pem /etc/ssl/certs/109982.pem"
	I1025 16:11:18.001169   12967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109982.pem
	I1025 16:11:18.002738   12967 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 22:58 /usr/share/ca-certificates/109982.pem
	I1025 16:11:18.002764   12967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109982.pem
	I1025 16:11:18.004500   12967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109982.pem /etc/ssl/certs/3ec20f2e.0"
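
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes: `openssl x509 -hash -noout` prints the 8-hex-digit hash a given certificate hashes to, and a `<hash>.0` link in /etc/ssl/certs is what lets OpenSSL-based clients find the CA by hash lookup. A minimal sketch of the same trust-store step for the minikube CA (paths taken from the log; run inside the guest):

    # Compute the subject-name hash OpenSSL uses for trust-store lookups
    # (b5213941 for the minikube CA, per the symlink above).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Link the cert into the trust store under both its name and its hash.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
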
	I1025 16:11:18.007288   12967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 16:11:18.008771   12967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 16:11:18.010863   12967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 16:11:18.012678   12967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 16:11:18.014441   12967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 16:11:18.016399   12967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 16:11:18.018298   12967 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
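
Each `-checkend 86400` call above asks OpenSSL whether the certificate stays valid for another 86400 seconds (24 hours): exit status 0 means it does, 1 means it would expire within the window, which is how the runner decides whether the existing control-plane certs can be reused. A sketch of one such check (cert path from the log):

    # -checkend N exits 0 if the cert stays valid for another N seconds,
    # 1 if it would expire within that window (here: 86400s = 24h).
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "certificate reusable"
    else
        echo "certificate expires within 24h; regenerate"
    fi
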
	I1025 16:11:18.020105   12967 kubeadm.go:392] StartCluster: {Name:running-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62164 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:11:18.020179   12967 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 16:11:18.030932   12967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 16:11:18.034868   12967 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 16:11:18.034877   12967 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 16:11:18.034911   12967 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 16:11:18.037878   12967 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:11:18.037912   12967 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-023000" does not appear in /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:11:18.037926   12967 kubeconfig.go:62] /Users/jenkins/minikube-integration/19758-10490/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-023000" cluster setting kubeconfig missing "running-upgrade-023000" context setting]
	I1025 16:11:18.038106   12967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:11:18.038813   12967 kapi.go:59] client config for running-upgrade-023000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104cbe510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 16:11:18.039797   12967 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 16:11:18.042669   12967 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-023000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
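
The drift detected above is twofold: the CRI socket now needs the `unix://` scheme (`unix:///var/run/cri-dockerd.sock`), and the kubelet cgroup driver changes from `systemd` to `cgroupfs` (plus hairpinMode and runtimeRequestTimeout), so the runner rewrites kubeadm.yaml and reconfigures the cluster instead of reusing the running control plane. The check itself is just a unified diff, with the exit status deciding the path; roughly:

    # diff -u exits 0 when the configs match, 1 when they differ.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        # Drift detected: promote the new config and reconfigure from it.
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
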
	I1025 16:11:18.042674   12967 kubeadm.go:1160] stopping kube-system containers ...
	I1025 16:11:18.042725   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 16:11:18.060027   12967 docker.go:483] Stopping containers: [8b920158411f 7b13ed3e3343 cf10a1d31713 e38bfcb823b1 3ebdfc2c8d0a a799026f87c0 6be0347befd5 f7d087d3ed95 92587cab1da1 5c7628b168ba c0de5082f75b 6b74292eb7ea fe6186ca2bc9 ab66076cbbb4 cd7959456317 9c9f55269df1]
	I1025 16:11:18.060106   12967 ssh_runner.go:195] Run: docker stop 8b920158411f 7b13ed3e3343 cf10a1d31713 e38bfcb823b1 3ebdfc2c8d0a a799026f87c0 6be0347befd5 f7d087d3ed95 92587cab1da1 5c7628b168ba c0de5082f75b 6b74292eb7ea fe6186ca2bc9 ab66076cbbb4 cd7959456317 9c9f55269df1
	I1025 16:11:18.199446   12967 ssh_runner.go:195] Run: sudo systemctl stop kubelet
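
Tearing down the old control plane works by listing every container whose name matches the kubelet's naming scheme for kube-system pods, stopping them in one batch, then stopping the kubelet so nothing respawns during reconfiguration. A sketch of the pattern (filter copied from the log):

    # List every container (running or exited) from kube-system pods; the
    # kubelet names them k8s_<container>_<pod>_<namespace>_<uid>_<attempt>.
    ids=$(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}')
    # Stop the batch, then the kubelet, so nothing respawns meanwhile.
    docker stop $ids
    sudo systemctl stop kubelet
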
	I1025 16:11:18.258925   12967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:11:18.264230   12967 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct 25 23:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct 25 23:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 25 23:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 25 23:10 /etc/kubernetes/scheduler.conf
	
	I1025 16:11:18.264289   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/admin.conf
	I1025 16:11:18.275702   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:11:18.275778   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:11:18.282747   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/kubelet.conf
	I1025 16:11:18.293639   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:11:18.293693   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:11:18.299278   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/controller-manager.conf
	I1025 16:11:18.303285   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:11:18.303321   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:11:18.306503   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/scheduler.conf
	I1025 16:11:18.309305   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:11:18.309341   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
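
The four grep/rm pairs above implement one rule: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint (https://control-plane.minikube.internal:62164) is considered stale and deleted, so the kubeconfig phase below can regenerate it. As a loop, the same logic is roughly:

    endpoint="https://control-plane.minikube.internal:62164"
    for f in admin kubelet controller-manager scheduler; do
        # grep exits nonzero when the endpoint is absent: the file is stale.
        sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
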
	I1025 16:11:18.312317   12967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:11:18.315330   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:11:18.348554   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:11:19.091057   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:11:19.285435   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:11:19.305666   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
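
Rather than a full `kubeadm init`, the restart runs the individual phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml, with PATH pointed at the cached v1.24.1 binaries. Condensed into a loop (same commands as above, not minikube's actual Go code):

    # Run the individual init phases against the repaired config, using
    # the cached v1.24.1 binaries rather than anything on the default PATH.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
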
	I1025 16:11:19.325742   12967 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:11:19.325827   12967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:11:19.828207   12967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:11:20.327903   12967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:11:20.332377   12967 api_server.go:72] duration metric: took 1.006643708s to wait for apiserver process to appear ...
	I1025 16:11:20.332389   12967 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:11:20.332417   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:25.334571   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:25.334687   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:30.335729   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:30.335825   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:35.336900   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:35.336935   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:40.337339   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:40.337431   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:45.339066   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:45.339148   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:50.341032   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:50.341130   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:11:55.342065   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:11:55.342124   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:00.344578   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:00.344675   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:05.347466   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:05.347513   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:10.349978   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:10.350067   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:15.352828   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:15.352904   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:20.355477   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
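
Each probe above has roughly a five-second budget and the loop keeps retrying; from this point on the apiserver at 10.0.2.15:8443 never answers /healthz, which is what ultimately fails the test. A manual equivalent of the probe (curl shown for illustration; minikube itself uses a Go HTTP client):

    # Probe the health endpoint with a 5s budget; -k skips TLS
    # verification because the serving cert is signed by the minikube CA.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz; do
        echo "apiserver not healthy yet, retrying"
        sleep 3
    done
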
	I1025 16:12:20.356046   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:12:20.394931   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:12:20.395094   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:12:20.416747   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:12:20.416880   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:12:20.432125   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:12:20.432215   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:12:20.445239   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:12:20.445332   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:12:20.456081   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:12:20.456161   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:12:20.466718   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:12:20.466802   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:12:20.476908   12967 logs.go:282] 0 containers: []
	W1025 16:12:20.476920   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:12:20.476991   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:12:20.487346   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:12:20.487371   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:12:20.487376   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:12:20.499501   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:12:20.499511   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:12:20.512625   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:12:20.512644   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:12:20.526059   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:12:20.526072   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:12:20.601258   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:12:20.601271   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:12:20.615378   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:12:20.615390   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:12:20.630022   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:12:20.630031   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:12:20.641927   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:12:20.641938   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:12:20.669828   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:12:20.669838   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:12:20.697522   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:12:20.697535   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:12:20.709520   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:12:20.709530   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:12:20.720591   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:12:20.720603   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:12:20.738012   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:12:20.738022   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:12:20.780410   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:12:20.780420   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:12:20.791686   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:12:20.791698   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:12:20.796064   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:12:20.796074   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:12:20.816582   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:12:20.816594   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
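
Each diagnostic pass above (and the near-identical passes that follow on every failed healthz cycle) uses the same pattern: find the current and exited containers for each control-plane component by name filter, dump the last 400 lines from each, and add journalctl output for the kubelet and Docker plus dmesg and `kubectl describe nodes`. Condensed, the per-component part is roughly:

    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
        # One entry per current/exited container of the component.
        for id in $(docker ps -a --filter=name="k8s_${comp}" --format '{{.ID}}'); do
            echo "=== ${comp} ${id} ==="
            docker logs --tail 400 "$id"
        done
    done
    sudo journalctl -u kubelet -n 400               # kubelet
    sudo journalctl -u docker -u cri-docker -n 400  # Docker / cri-dockerd
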
	I1025 16:12:23.330605   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:28.333055   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:28.333542   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:12:28.380160   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:12:28.380322   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:12:28.401986   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:12:28.402114   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:12:28.417242   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:12:28.417337   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:12:28.429981   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:12:28.430057   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:12:28.440743   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:12:28.440816   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:12:28.451321   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:12:28.451392   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:12:28.461398   12967 logs.go:282] 0 containers: []
	W1025 16:12:28.461410   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:12:28.461472   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:12:28.471775   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:12:28.471793   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:12:28.471798   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:12:28.476171   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:12:28.476177   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:12:28.514075   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:12:28.514089   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:12:28.539153   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:12:28.539162   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:12:28.557238   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:12:28.557248   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:12:28.569397   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:12:28.569412   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:12:28.580573   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:12:28.580583   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:12:28.622569   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:12:28.622578   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:12:28.646467   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:12:28.646476   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:12:28.659912   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:12:28.659925   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:12:28.671750   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:12:28.671760   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:12:28.683336   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:12:28.683347   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:12:28.695868   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:12:28.695880   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:12:28.710135   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:12:28.710148   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:12:28.738544   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:12:28.738553   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:12:28.749920   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:12:28.749932   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:12:28.761909   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:12:28.761920   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:12:31.276524   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:36.279411   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:36.280038   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:12:36.319626   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:12:36.319786   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:12:36.342465   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:12:36.342594   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:12:36.357519   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:12:36.357608   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:12:36.377091   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:12:36.377173   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:12:36.387608   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:12:36.387701   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:12:36.398730   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:12:36.398810   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:12:36.408889   12967 logs.go:282] 0 containers: []
	W1025 16:12:36.408899   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:12:36.408960   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:12:36.420806   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:12:36.420824   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:12:36.420829   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:12:36.425427   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:12:36.425435   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:12:36.444460   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:12:36.444472   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:12:36.456279   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:12:36.456291   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:12:36.467397   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:12:36.467406   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:12:36.478729   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:12:36.478741   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:12:36.490268   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:12:36.490282   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:12:36.525371   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:12:36.525381   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:12:36.539510   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:12:36.539521   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:12:36.553655   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:12:36.553667   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:12:36.565387   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:12:36.565400   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:12:36.606345   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:12:36.606355   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:12:36.617881   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:12:36.617892   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:12:36.629381   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:12:36.629393   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:12:36.641844   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:12:36.641857   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:12:36.667939   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:12:36.667950   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:12:36.685938   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:12:36.685950   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:12:39.215110   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:44.217926   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:44.218121   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:12:44.250641   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:12:44.250731   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:12:44.263185   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:12:44.263261   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:12:44.274525   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:12:44.274616   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:12:44.286954   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:12:44.287051   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:12:44.300735   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:12:44.300817   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:12:44.312411   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:12:44.312490   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:12:44.322599   12967 logs.go:282] 0 containers: []
	W1025 16:12:44.322610   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:12:44.322674   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:12:44.337144   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:12:44.337161   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:12:44.337166   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:12:44.348403   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:12:44.348415   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:12:44.365689   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:12:44.365701   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:12:44.377742   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:12:44.377752   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:12:44.382568   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:12:44.382578   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:12:44.394025   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:12:44.394036   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:12:44.407728   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:12:44.407742   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:12:44.419378   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:12:44.419388   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:12:44.432756   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:12:44.432765   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:12:44.458919   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:12:44.458931   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:12:44.496104   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:12:44.496116   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:12:44.521104   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:12:44.521118   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:12:44.532780   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:12:44.532793   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:12:44.544297   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:12:44.544307   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:12:44.557204   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:12:44.557217   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:12:44.568530   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:12:44.568543   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:12:44.610210   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:12:44.610217   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:12:47.125437   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:12:52.128165   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:12:52.128734   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:12:52.168235   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:12:52.168396   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:12:52.190077   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:12:52.190209   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:12:52.205621   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:12:52.205706   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:12:52.217872   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:12:52.217947   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:12:52.229647   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:12:52.229718   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:12:52.242723   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:12:52.242832   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:12:52.253011   12967 logs.go:282] 0 containers: []
	W1025 16:12:52.253022   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:12:52.253090   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:12:52.263750   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:12:52.263765   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:12:52.263770   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:12:52.275388   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:12:52.275398   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:12:52.286466   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:12:52.286477   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:12:52.298215   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:12:52.298228   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:12:52.338497   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:12:52.338508   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:12:52.342731   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:12:52.342739   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:12:52.368036   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:12:52.368050   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:12:52.380378   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:12:52.380390   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:12:52.394857   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:12:52.394867   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:12:52.412259   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:12:52.412268   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:12:52.438388   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:12:52.438397   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:12:52.472366   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:12:52.472376   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:12:52.486227   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:12:52.486238   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:12:52.497949   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:12:52.497959   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:12:52.519512   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:12:52.519525   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:12:52.531232   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:12:52.531246   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:12:52.546323   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:12:52.546335   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:12:55.060236   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:00.062972   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:00.063474   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:00.102697   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:00.102889   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:00.124402   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:00.124541   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:00.140206   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:00.140303   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:00.152884   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:00.152970   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:00.164007   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:00.164084   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:00.174612   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:00.174688   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:00.189537   12967 logs.go:282] 0 containers: []
	W1025 16:13:00.189547   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:00.189603   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:00.200430   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:00.200449   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:00.200454   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:00.242683   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:00.242691   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:00.246706   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:00.246714   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:00.260314   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:00.260326   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:00.271482   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:00.271492   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:00.282995   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:00.283008   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:00.295127   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:00.295139   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:00.312197   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:00.312206   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:00.323488   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:00.323499   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:00.334661   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:00.334670   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:00.368960   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:00.368970   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:00.381837   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:00.381848   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:00.393735   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:00.393745   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:00.405384   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:00.405394   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:00.432171   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:00.432181   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:00.445977   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:00.445989   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:00.470442   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:00.470455   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:02.985929   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:07.988829   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:07.989428   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:08.030854   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:08.031017   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:08.053542   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:08.053668   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:08.069106   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:08.069184   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:08.082347   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:08.082438   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:08.093484   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:08.093570   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:08.105293   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:08.105367   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:08.115796   12967 logs.go:282] 0 containers: []
	W1025 16:13:08.115808   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:08.115876   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:08.126493   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:08.126510   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:08.126515   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:08.131162   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:08.131172   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:08.145428   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:08.145441   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:08.156968   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:08.156977   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:08.168593   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:08.168605   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:08.211083   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:08.211093   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:08.224776   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:08.224787   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:08.236965   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:08.236978   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:08.254958   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:08.254967   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:08.266483   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:08.266493   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:08.300476   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:08.300485   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:08.312082   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:08.312095   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:08.323339   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:08.323350   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:08.348038   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:08.348047   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:08.363251   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:08.363263   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:08.375176   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:08.375190   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:08.386426   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:08.386436   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:10.913236   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:15.914100   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:15.914221   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:15.926024   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:15.926115   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:15.936839   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:15.936913   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:15.947551   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:15.947631   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:15.957823   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:15.957892   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:15.968446   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:15.968524   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:15.980327   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:15.980405   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:15.991709   12967 logs.go:282] 0 containers: []
	W1025 16:13:15.991720   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:15.991789   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:16.002842   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
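After each failed probe, the runner enumerates the control-plane containers with docker ps name filters, one filter per kubeadm component; the two IDs returned for most components are typically an exited first attempt plus its restart, while "kindnet" matches nothing because this cluster does not use that CNI. A sketch of the enumeration, assuming a local docker CLI in place of minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers, running or exited,
// whose name matches the given kubeadm component prefix. Hypothetical
// helper; the filter and format strings are copied from the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:282 lines
	}
}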
	I1025 16:13:16.002858   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:16.002864   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:16.014307   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:16.014321   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:16.025635   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:16.025647   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:16.040394   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:16.040405   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:16.064320   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:16.064328   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:16.068391   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:16.068399   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:16.083207   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:16.083222   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:16.107875   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:16.107889   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:16.119164   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:16.119174   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:16.158307   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:16.158315   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:16.198584   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:16.198595   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:16.209858   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:16.209869   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:16.221629   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:16.221639   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:16.238979   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:16.238988   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:16.250973   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:16.250982   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:16.265366   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:16.265376   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:16.278342   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:16.278353   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
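With the IDs in hand, every diagnostic pass tails the last 400 lines from each container and from the host journals, filters dmesg down to warnings and worse, and runs describe nodes with the kubectl binary and kubeconfig that minikube installed inside the VM, so that step does not depend on the host's kubectl. A sketch of the gathering step under the same local-CLI assumption; paths, unit names, and the container ID are copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// Each entry mirrors one "Gathering logs for ..." source in the report.
var sources = []struct {
	name string
	cmd  []string
}{
	{"etcd [cf10a1d31713]", []string{"docker", "logs", "--tail", "400", "cf10a1d31713"}},
	{"kubelet", []string{"sudo", "journalctl", "-u", "kubelet", "-n", "400"}},
	{"Docker", []string{"sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"}},
	{"dmesg", []string{"/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"}},
	{"describe nodes", []string{"sudo", "/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig"}},
}

func main() {
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command(s.cmd[0], s.cmd[1:]...).CombinedOutput()
		if err != nil {
			fmt.Println("  failed:", err)
		}
		fmt.Print(string(out))
	}
}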
	I1025 16:13:18.795092   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:23.797674   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:23.798216   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:23.838916   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:23.839060   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:23.858115   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:23.858228   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:23.873151   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:23.873252   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:23.885149   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:23.885231   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:23.895921   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:23.895992   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:23.906603   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:23.906678   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:23.917192   12967 logs.go:282] 0 containers: []
	W1025 16:13:23.917205   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:23.917268   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:23.927623   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:23.927641   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:23.927646   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:23.939313   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:23.939325   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:23.950327   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:23.950337   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:23.962317   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:23.962331   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:23.966987   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:23.966993   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:23.981753   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:23.981765   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:24.007006   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:24.007016   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:24.048408   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:24.048417   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:24.082395   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:24.082407   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:24.097162   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:24.097171   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:24.115304   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:24.115314   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:24.134612   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:24.134623   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:24.147394   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:24.147405   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:24.172334   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:24.172347   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:24.194583   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:24.194593   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:24.206790   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:24.206802   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:24.218377   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:24.218390   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:26.732111   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:31.734834   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:31.735432   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:31.779171   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:31.779327   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:31.798262   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:31.798367   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:31.812614   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:31.812705   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:31.824524   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:31.824602   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:31.834925   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:31.834997   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:31.845249   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:31.845329   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:31.855498   12967 logs.go:282] 0 containers: []
	W1025 16:13:31.855510   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:31.855579   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:31.869144   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:31.869162   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:31.869167   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:31.908173   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:31.908185   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:31.929263   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:31.929272   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:31.943413   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:31.943425   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:31.955128   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:31.955140   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:31.967479   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:31.967493   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:32.008917   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:32.008927   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:32.020118   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:32.020128   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:32.031480   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:32.031489   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:32.042629   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:32.042641   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:32.066866   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:32.066873   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:32.071338   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:32.071346   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:32.095833   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:32.095845   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:32.107730   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:32.107742   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:32.121369   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:32.121382   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:32.133226   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:32.133238   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:32.151634   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:32.151642   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
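The "container status" source is the one command above with shell fallback logic: the backticks are command substitution, so `which crictl` resolves to the full crictl path when the binary exists, `echo crictl` leaves the bare word otherwise (which then fails to execute), and the trailing || falls back to docker ps -a either way the crictl invocation fails. Reproduced as a sketch, again assuming a local shell instead of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One-liner copied from the log: prefer crictl if present, otherwise
	// fall back to listing containers with docker.
	const containerStatus = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", containerStatus).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}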
	I1025 16:13:34.665237   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:39.667639   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:39.668145   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:39.701094   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:39.701244   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:39.720664   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:39.720779   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:39.734103   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:39.734183   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:39.745546   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:39.745626   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:39.757558   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:39.757635   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:39.768477   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:39.768555   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:39.778723   12967 logs.go:282] 0 containers: []
	W1025 16:13:39.778739   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:39.778810   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:39.792697   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:39.792718   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:39.792723   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:39.804852   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:39.804862   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:39.816256   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:39.816268   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:39.832969   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:39.832979   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:39.850991   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:39.851000   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:39.864477   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:39.864490   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:39.875807   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:39.875819   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:39.888920   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:39.888934   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:39.914956   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:39.914969   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:39.926367   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:39.926380   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:39.952763   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:39.952774   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:39.966080   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:39.966093   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:40.006503   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:40.006513   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:40.011115   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:40.011123   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:40.056438   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:40.056451   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:40.070659   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:40.070671   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:40.082064   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:40.082075   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:42.595777   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:47.596316   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:47.596435   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:47.607707   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:47.607782   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:47.619502   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:47.619595   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:47.630587   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:47.630666   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:47.644052   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:47.644142   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:47.657381   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:47.657466   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:47.669178   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:47.669262   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:47.680919   12967 logs.go:282] 0 containers: []
	W1025 16:13:47.680935   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:47.681010   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:47.693139   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:47.693158   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:47.693167   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:47.698257   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:47.698269   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:47.747663   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:47.747674   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:47.775442   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:47.775462   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:47.794613   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:47.794625   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:47.823503   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:47.823523   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:47.841267   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:47.841283   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:47.855428   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:47.855440   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:47.902135   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:47.902163   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:47.919706   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:47.919720   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:47.937694   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:47.937707   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:47.951521   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:47.951535   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:47.965269   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:47.965282   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:47.994199   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:47.994216   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:48.012772   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:48.012787   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:48.027239   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:48.027250   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:48.041077   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:48.041091   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:50.557138   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:13:55.559527   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:13:55.559966   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:13:55.592415   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:13:55.592570   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:13:55.611174   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:13:55.611280   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:13:55.625062   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:13:55.625146   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:13:55.636982   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:13:55.637068   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:13:55.647507   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:13:55.647584   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:13:55.657938   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:13:55.658026   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:13:55.668783   12967 logs.go:282] 0 containers: []
	W1025 16:13:55.668797   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:13:55.668877   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:13:55.679437   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:13:55.679454   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:13:55.679459   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:13:55.690645   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:13:55.690654   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:13:55.702382   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:13:55.702392   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:13:55.734125   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:13:55.734142   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:13:55.746298   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:13:55.746308   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:13:55.757950   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:13:55.757963   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:13:55.774469   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:13:55.774480   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:13:55.786314   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:13:55.786324   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:13:55.800374   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:13:55.800382   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:13:55.834351   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:13:55.834362   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:13:55.848395   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:13:55.848405   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:13:55.861404   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:13:55.861419   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:13:55.873554   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:13:55.873565   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:13:55.878605   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:13:55.878612   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:13:55.890830   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:13:55.890841   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:13:55.916086   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:13:55.916102   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:13:55.928533   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:13:55.928544   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:13:58.469919   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:03.472132   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:03.472265   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:03.483794   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:03.483879   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:03.494548   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:03.494636   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:03.505702   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:03.505867   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:03.516773   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:03.516851   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:03.527375   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:03.527448   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:03.537927   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:03.537997   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:03.548972   12967 logs.go:282] 0 containers: []
	W1025 16:14:03.548982   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:03.549042   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:03.560489   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:03.560506   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:03.560511   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:03.585445   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:03.585459   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:03.623200   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:03.623209   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:03.635611   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:03.635622   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:03.661540   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:03.661554   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:03.687683   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:03.687690   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:03.699826   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:03.699839   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:03.736791   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:03.736803   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:03.777990   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:03.777999   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:03.789736   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:03.789748   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:03.801703   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:03.801712   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:03.816114   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:03.816123   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:03.829658   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:03.829667   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:03.841074   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:03.841084   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:03.853035   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:03.853045   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:03.864684   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:03.864693   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:03.878899   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:03.878910   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:06.385199   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:11.387858   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:11.387984   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:11.399081   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:11.399168   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:11.413380   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:11.413471   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:11.425537   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:11.425625   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:11.436911   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:11.436998   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:11.448598   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:11.448679   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:11.459328   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:11.459415   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:11.469878   12967 logs.go:282] 0 containers: []
	W1025 16:14:11.469889   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:11.469958   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:11.480595   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:11.480611   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:11.480617   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:11.489580   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:11.489590   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:11.528354   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:11.528369   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:11.540010   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:11.540021   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:11.552467   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:11.552479   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:11.570943   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:11.570954   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:11.588133   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:11.588149   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:11.615441   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:11.615458   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:11.630343   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:11.630356   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:11.642645   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:11.642656   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:11.654901   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:11.654913   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:11.669937   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:11.669952   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:11.684277   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:11.684287   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:11.696089   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:11.696104   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:11.707944   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:11.707974   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:11.752678   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:11.752698   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:11.771968   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:11.771980   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:14.298553   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:19.301079   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:19.301211   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:19.312759   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:19.312842   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:19.322917   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:19.322999   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:19.333078   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:19.333163   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:19.343512   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:19.343594   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:19.357170   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:19.357247   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:19.368233   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:19.368315   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:19.378244   12967 logs.go:282] 0 containers: []
	W1025 16:14:19.378256   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:19.378324   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:19.388479   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:19.388493   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:19.388498   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:19.401919   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:19.401934   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:19.413900   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:19.413911   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:19.428269   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:19.428279   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:19.442298   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:19.442312   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:19.453957   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:19.453968   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:19.496471   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:19.496479   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:19.527360   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:19.527370   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:19.538959   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:19.538968   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:19.555913   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:19.555924   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:19.567188   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:19.567199   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:19.578805   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:19.578816   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:19.591293   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:19.591302   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:19.602932   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:19.602944   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:19.625479   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:19.625485   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:19.630074   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:19.630083   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:19.666279   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:19.666292   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:22.182078   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:27.183239   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:27.183903   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:27.222221   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:27.222389   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:27.244351   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:27.244481   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:27.260130   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:27.260223   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:27.272504   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:27.272588   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:27.284723   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:27.284804   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:27.300873   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:27.300957   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:27.316645   12967 logs.go:282] 0 containers: []
	W1025 16:14:27.316657   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:27.316736   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:27.328656   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:27.328678   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:27.328683   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:27.341535   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:27.341549   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:27.366043   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:27.366063   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:27.370888   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:27.370896   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:27.387378   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:27.387390   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:27.400604   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:27.400614   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:27.418637   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:27.418648   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:27.431084   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:27.431094   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:27.445226   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:27.445236   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:27.457123   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:27.457134   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:27.468704   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:27.468714   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:27.512644   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:27.512660   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:27.524198   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:27.524212   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:27.543135   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:27.543147   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:27.560734   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:27.560757   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:27.573781   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:27.573796   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:27.613259   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:27.613273   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:30.140669   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:35.143530   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:35.143755   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:35.161619   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:35.161709   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:35.172782   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:35.172870   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:35.184013   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:35.184095   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:35.195191   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:35.195268   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:35.206137   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:35.206221   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:35.217263   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:35.217342   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:35.228763   12967 logs.go:282] 0 containers: []
	W1025 16:14:35.228776   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:35.228848   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:35.239352   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:35.239369   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:35.239375   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:35.257561   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:35.257574   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:35.281118   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:35.281125   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:35.297011   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:35.297022   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:35.338793   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:35.338803   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:35.353150   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:35.353163   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:35.366419   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:35.366431   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:35.377801   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:35.377812   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:35.390212   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:35.390223   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:35.404810   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:35.404822   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:35.430264   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:35.430274   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:35.442580   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:35.442590   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:35.454273   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:35.454283   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:35.458921   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:35.458928   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:35.494171   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:35.494182   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:35.510308   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:35.510318   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:35.522646   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:35.522660   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:38.036619   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:43.038873   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:43.039079   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:43.053891   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:43.053978   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:43.065473   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:43.065558   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:43.076263   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:43.076339   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:43.086506   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:43.086593   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:43.097455   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:43.097527   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:43.109042   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:43.109127   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:43.119286   12967 logs.go:282] 0 containers: []
	W1025 16:14:43.119298   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:43.119366   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:43.135891   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:43.135908   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:43.135912   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:43.147047   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:43.147058   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:43.186612   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:43.186622   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:43.190960   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:43.190969   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:43.224986   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:43.224999   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:43.243356   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:43.243365   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:43.267645   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:43.267654   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:43.278968   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:43.278977   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:43.298713   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:43.298726   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:43.318607   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:43.318616   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:43.329781   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:43.329789   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:43.344297   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:43.344308   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:43.355876   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:43.355889   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:43.373691   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:43.373702   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:43.385368   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:43.385378   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:43.400378   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:43.400390   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:43.425512   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:43.425526   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:45.943503   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:50.946226   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:50.946746   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:50.985744   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:50.985922   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:51.009147   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:51.009274   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:51.023660   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:51.023745   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:51.035992   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:51.036083   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:51.046749   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:51.046826   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:51.057651   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:51.057729   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:51.068627   12967 logs.go:282] 0 containers: []
	W1025 16:14:51.068642   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:51.068712   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:51.084002   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:51.084020   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:51.084025   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:51.096531   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:51.096548   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:51.111410   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:51.111420   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:51.123165   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:51.123174   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:51.135382   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:51.135395   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:51.147094   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:51.147108   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:51.158842   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:51.158853   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:51.171145   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:51.171156   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:51.183741   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:51.183753   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:51.207862   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:51.207880   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:51.250011   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:51.250031   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:51.254552   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:51.254563   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:51.293569   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:51.293579   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:51.313877   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:51.313888   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:51.339507   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:51.339521   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:51.351361   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:51.351374   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:51.365987   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:51.366001   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:53.885422   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:58.887580   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:58.887681   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:58.898695   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:58.898779   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:58.909622   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:58.909705   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:58.920475   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:58.920561   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:58.931548   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:58.931628   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:58.942486   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:58.942590   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:58.953684   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:58.953788   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:58.964393   12967 logs.go:282] 0 containers: []
	W1025 16:14:58.964403   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:58.964472   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:58.975232   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:58.975249   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:58.975254   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:59.020274   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:59.020286   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:59.032392   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:59.032402   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:59.043979   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:59.043995   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:59.067941   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:59.067950   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:59.092741   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:59.092780   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:59.106179   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:59.106189   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:59.127161   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:59.127175   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:59.163993   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:59.164007   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:59.178608   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:59.178623   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:59.200120   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:59.200131   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:59.213055   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:59.213067   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:59.225252   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:59.225264   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:59.237371   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:59.237381   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:59.242110   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:59.242119   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:59.267435   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:59.267449   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:59.278758   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:59.278770   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:15:01.796287   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:06.798756   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:06.798925   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:15:06.813993   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:15:06.814080   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:15:06.826393   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:15:06.826480   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:15:06.839452   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:15:06.839532   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:15:06.850139   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:15:06.850219   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:15:06.861310   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:15:06.861385   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:15:06.872197   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:15:06.872282   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:15:06.882212   12967 logs.go:282] 0 containers: []
	W1025 16:15:06.882226   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:15:06.882287   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:15:06.892834   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:15:06.892852   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:15:06.892858   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:15:06.897819   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:15:06.897828   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:15:06.911263   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:15:06.911275   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:15:06.922914   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:15:06.922929   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:15:06.958603   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:15:06.958614   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:15:06.970429   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:15:06.970439   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:15:06.989202   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:15:06.989212   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:15:07.000449   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:15:07.000460   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:15:07.017837   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:15:07.017850   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:15:07.043522   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:15:07.043533   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:15:07.058331   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:15:07.058341   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:15:07.069717   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:15:07.069728   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:15:07.111583   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:15:07.111595   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:15:07.123498   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:15:07.123510   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:15:07.135262   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:15:07.135277   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:15:07.147135   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:15:07.147146   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:15:07.171348   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:15:07.171355   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:15:09.685090   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:14.687290   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:14.687476   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:15:14.698951   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:15:14.699036   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:15:14.710069   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:15:14.710149   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:15:14.720844   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:15:14.720924   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:15:14.731631   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:15:14.731706   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:15:14.743108   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:15:14.743186   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:15:14.754513   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:15:14.754599   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:15:14.766824   12967 logs.go:282] 0 containers: []
	W1025 16:15:14.766839   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:15:14.766908   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:15:14.777911   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:15:14.777931   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:15:14.777936   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:15:14.791262   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:15:14.791273   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:15:14.805093   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:15:14.805104   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:15:14.822332   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:15:14.822342   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:15:14.833858   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:15:14.833870   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:15:14.858297   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:15:14.858308   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:15:14.870767   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:15:14.870778   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:15:14.911487   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:15:14.911497   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:15:14.946781   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:15:14.946794   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:15:14.961200   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:15:14.961211   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:15:14.972625   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:15:14.972636   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:15:14.983839   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:15:14.983849   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:15:14.988229   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:15:14.988237   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:15:15.002010   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:15:15.002018   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:15:15.013446   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:15:15.013457   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:15:15.029936   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:15:15.029946   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:15:15.055265   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:15:15.055276   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:15:17.571969   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:22.572737   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:22.572830   12967 kubeadm.go:597] duration metric: took 4m4.539642833s to restartPrimaryControlPlane
	W1025 16:15:22.572936   12967 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 16:15:22.572974   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 16:15:23.563938   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 16:15:23.569062   12967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:15:23.572073   12967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:15:23.574740   12967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 16:15:23.574749   12967 kubeadm.go:157] found existing configuration files:
	
	I1025 16:15:23.574784   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/admin.conf
	I1025 16:15:23.577314   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 16:15:23.577342   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:15:23.580286   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/kubelet.conf
	I1025 16:15:23.582928   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 16:15:23.582951   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:15:23.585828   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/controller-manager.conf
	I1025 16:15:23.588921   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 16:15:23.588953   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:15:23.591944   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/scheduler.conf
	I1025 16:15:23.594475   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 16:15:23.594497   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 16:15:23.597632   12967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 16:15:23.616423   12967 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 16:15:23.616447   12967 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 16:15:23.671046   12967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 16:15:23.671105   12967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 16:15:23.671162   12967 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 16:15:23.723298   12967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 16:15:23.728352   12967 out.go:235]   - Generating certificates and keys ...
	I1025 16:15:23.728385   12967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 16:15:23.728422   12967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 16:15:23.728464   12967 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 16:15:23.728496   12967 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 16:15:23.728554   12967 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 16:15:23.728582   12967 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 16:15:23.728612   12967 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 16:15:23.728643   12967 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 16:15:23.728686   12967 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 16:15:23.728726   12967 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 16:15:23.728749   12967 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 16:15:23.728801   12967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 16:15:23.805818   12967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 16:15:23.978702   12967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 16:15:24.028020   12967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 16:15:24.154365   12967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 16:15:24.188741   12967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 16:15:24.189237   12967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 16:15:24.189264   12967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 16:15:24.277619   12967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 16:15:24.281830   12967 out.go:235]   - Booting up control plane ...
	I1025 16:15:24.281872   12967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 16:15:24.281924   12967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 16:15:24.282004   12967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 16:15:24.282049   12967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 16:15:24.282201   12967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 16:15:28.784100   12967 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501702 seconds
	I1025 16:15:28.784191   12967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 16:15:28.789056   12967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 16:15:29.301490   12967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 16:15:29.301696   12967 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-023000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 16:15:29.804642   12967 kubeadm.go:310] [bootstrap-token] Using token: cmr7v0.1vzgagcd1x6m03eo
	I1025 16:15:29.810594   12967 out.go:235]   - Configuring RBAC rules ...
	I1025 16:15:29.810662   12967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 16:15:29.810707   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 16:15:29.814489   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 16:15:29.815878   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 16:15:29.816929   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 16:15:29.817879   12967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 16:15:29.820997   12967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 16:15:30.015136   12967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 16:15:30.210064   12967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 16:15:30.210565   12967 kubeadm.go:310] 
	I1025 16:15:30.210602   12967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 16:15:30.210608   12967 kubeadm.go:310] 
	I1025 16:15:30.210646   12967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 16:15:30.210650   12967 kubeadm.go:310] 
	I1025 16:15:30.210700   12967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 16:15:30.210738   12967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 16:15:30.210771   12967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 16:15:30.210805   12967 kubeadm.go:310] 
	I1025 16:15:30.210877   12967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 16:15:30.210884   12967 kubeadm.go:310] 
	I1025 16:15:30.210912   12967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 16:15:30.210917   12967 kubeadm.go:310] 
	I1025 16:15:30.210967   12967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 16:15:30.211033   12967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 16:15:30.211108   12967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 16:15:30.211113   12967 kubeadm.go:310] 
	I1025 16:15:30.211164   12967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 16:15:30.211207   12967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 16:15:30.211211   12967 kubeadm.go:310] 
	I1025 16:15:30.211251   12967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cmr7v0.1vzgagcd1x6m03eo \
	I1025 16:15:30.211306   12967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe \
	I1025 16:15:30.211319   12967 kubeadm.go:310] 	--control-plane 
	I1025 16:15:30.211322   12967 kubeadm.go:310] 
	I1025 16:15:30.211363   12967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 16:15:30.211367   12967 kubeadm.go:310] 
	I1025 16:15:30.211414   12967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cmr7v0.1vzgagcd1x6m03eo \
	I1025 16:15:30.211472   12967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe 
	I1025 16:15:30.211532   12967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 16:15:30.211539   12967 cni.go:84] Creating CNI manager for ""
	I1025 16:15:30.211549   12967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:15:30.218956   12967 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 16:15:30.229776   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 16:15:30.232758   12967 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 16:15:30.237851   12967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 16:15:30.237905   12967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 16:15:30.237923   12967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-023000 minikube.k8s.io/updated_at=2024_10_25T16_15_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=running-upgrade-023000 minikube.k8s.io/primary=true
	I1025 16:15:30.279791   12967 ops.go:34] apiserver oom_adj: -16
	I1025 16:15:30.280347   12967 kubeadm.go:1113] duration metric: took 42.486875ms to wait for elevateKubeSystemPrivileges
	I1025 16:15:30.280356   12967 kubeadm.go:394] duration metric: took 4m12.262003292s to StartCluster
	I1025 16:15:30.280364   12967 settings.go:142] acquiring lock: {Name:mkc7ffce42494ff0056038ca2482eba326c60c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:15:30.280557   12967 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:15:30.280946   12967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:15:30.281174   12967 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:15:30.281216   12967 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 16:15:30.281250   12967 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-023000"
	I1025 16:15:30.281263   12967 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-023000"
	W1025 16:15:30.281266   12967 addons.go:243] addon storage-provisioner should already be in state true
	I1025 16:15:30.281280   12967 host.go:66] Checking if "running-upgrade-023000" exists ...
	I1025 16:15:30.281279   12967 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-023000"
	I1025 16:15:30.281288   12967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-023000"
	I1025 16:15:30.281371   12967 config.go:182] Loaded profile config "running-upgrade-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:15:30.282496   12967 kapi.go:59] client config for running-upgrade-023000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104cbe510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 16:15:30.282894   12967 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-023000"
	W1025 16:15:30.282899   12967 addons.go:243] addon default-storageclass should already be in state true
	I1025 16:15:30.282907   12967 host.go:66] Checking if "running-upgrade-023000" exists ...
	I1025 16:15:30.284957   12967 out.go:177] * Verifying Kubernetes components...
	I1025 16:15:30.285290   12967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 16:15:30.289155   12967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 16:15:30.289163   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:15:30.292955   12967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:15:30.296911   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:15:30.300991   12967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:15:30.300998   12967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 16:15:30.301005   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:15:30.389107   12967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:15:30.394373   12967 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:15:30.394425   12967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:15:30.398795   12967 api_server.go:72] duration metric: took 117.606666ms to wait for apiserver process to appear ...
	I1025 16:15:30.398803   12967 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:15:30.398811   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:30.427333   12967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 16:15:30.454418   12967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:15:30.767569   12967 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 16:15:30.767582   12967 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 16:15:35.400939   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:35.401013   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:40.401432   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:40.401463   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:45.401929   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:45.401999   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:50.402643   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:50.402695   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:55.403213   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:55.403254   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:00.403805   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:00.403845   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 16:16:00.769793   12967 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 16:16:00.775025   12967 out.go:177] * Enabled addons: storage-provisioner
	I1025 16:16:00.782859   12967 addons.go:510] duration metric: took 30.501853667s for enable addons: enabled=[storage-provisioner]
	I1025 16:16:05.404940   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:05.404994   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:10.406487   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:10.406540   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:15.408535   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:15.408564   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:20.410760   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:20.410802   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:25.413117   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:25.413178   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:30.413488   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:30.413596   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:30.425286   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:30.425371   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:30.436688   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:30.436774   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:30.458882   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:30.458981   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:30.475712   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:30.475801   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:30.488906   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:30.488995   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:30.500166   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:30.500253   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:30.511962   12967 logs.go:282] 0 containers: []
	W1025 16:16:30.511973   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:30.512047   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:30.523026   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:30.523041   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:30.523047   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:30.548576   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:30.548595   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:30.553858   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:30.553869   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:30.569299   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:30.569309   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:30.581593   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:30.581608   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:30.599109   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:30.599124   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:30.613798   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:30.613808   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:30.625993   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:30.626008   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:30.637566   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:30.637578   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:30.649421   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:30.649431   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:30.686232   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:30.686239   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:30.721057   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:30.721067   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:30.734957   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:30.734966   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:33.248487   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:38.249103   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:38.249198   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:38.261957   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:38.262042   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:38.274208   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:38.274291   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:38.286421   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:38.286505   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:38.298238   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:38.298405   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:38.314004   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:38.314084   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:38.325115   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:38.325191   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:38.340268   12967 logs.go:282] 0 containers: []
	W1025 16:16:38.340279   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:38.340351   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:38.351883   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:38.351900   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:38.351905   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:38.367123   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:38.367133   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:38.387704   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:38.387720   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:38.407321   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:38.407335   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:38.421034   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:38.421046   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:38.434207   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:38.434219   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:38.447392   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:38.447403   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:38.472967   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:38.472981   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:38.510378   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:38.510391   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:38.515380   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:38.515388   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:38.549763   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:38.549773   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:38.563893   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:38.563903   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:38.575618   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:38.575633   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:41.089478   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:46.089929   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:46.089992   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:46.102306   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:46.102384   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:46.114057   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:46.114140   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:46.128563   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:46.128641   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:46.140304   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:46.140389   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:46.152488   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:46.152578   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:46.163749   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:46.163825   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:46.174671   12967 logs.go:282] 0 containers: []
	W1025 16:16:46.174681   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:46.174750   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:46.185818   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:46.185836   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:46.185844   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:46.200904   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:46.200913   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:46.220022   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:46.220044   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:46.233379   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:46.233390   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:46.260913   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:46.260923   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:46.297693   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:46.297705   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:46.313608   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:46.313625   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:46.329939   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:46.329950   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:46.346502   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:46.346515   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:46.359229   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:46.359242   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:46.397254   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:46.397266   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:46.401994   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:46.402001   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:46.419858   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:46.419868   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:48.933410   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:53.935263   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:53.935468   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:53.952504   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:53.952588   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:53.970116   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:53.970193   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:53.981885   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:53.981961   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:53.993571   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:53.993645   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:54.005180   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:54.005262   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:54.017470   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:54.017549   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:54.029124   12967 logs.go:282] 0 containers: []
	W1025 16:16:54.029135   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:54.029205   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:54.040325   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:54.040338   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:54.040343   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:54.078314   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:54.078324   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:54.092949   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:54.092961   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:54.108689   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:54.108704   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:54.127686   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:54.127696   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:54.142516   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:54.142527   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:54.154519   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:54.154530   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:54.159631   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:54.159639   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:54.196875   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:54.196890   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:54.213013   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:54.213029   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:54.225438   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:54.225448   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:54.250533   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:54.250547   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:54.278049   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:54.278076   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:56.792681   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:01.795043   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:01.795713   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:01.824338   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:01.824422   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:01.840908   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:01.840991   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:01.853791   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:01.853859   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:01.865510   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:01.865610   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:01.877251   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:01.877337   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:01.888492   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:01.888566   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:01.899419   12967 logs.go:282] 0 containers: []
	W1025 16:17:01.899429   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:01.899492   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:01.911612   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:01.911625   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:01.911630   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:01.916544   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:01.916553   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:01.932471   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:01.932482   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:01.948654   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:01.948667   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:01.961549   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:01.961562   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:01.980956   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:01.980965   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:02.018283   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:02.018297   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:02.057636   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:02.057647   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:02.073572   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:02.073584   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:02.086851   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:02.086863   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:02.102603   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:02.102615   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:02.115343   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:02.115356   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:02.142661   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:02.142682   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:04.657244   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:09.659424   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:09.659638   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:09.673668   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:09.673764   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:09.684861   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:09.684940   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:09.694986   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:09.695069   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:09.712184   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:09.712262   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:09.722318   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:09.722363   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:09.737535   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:09.737622   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:09.749853   12967 logs.go:282] 0 containers: []
	W1025 16:17:09.749866   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:09.749937   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:09.761559   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:09.761573   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:09.761579   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:09.766689   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:09.766700   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:09.779335   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:09.779345   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:09.795244   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:09.795257   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:09.808108   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:09.808125   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:09.846017   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:09.846028   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:09.860806   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:09.860816   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:09.880345   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:09.880356   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:09.898789   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:09.898800   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:09.919227   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:09.919235   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:09.931757   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:09.931768   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:09.957821   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:09.957832   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:09.972392   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:09.972404   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:12.515342   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:17.517585   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:17.517750   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:17.532454   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:17.532550   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:17.544593   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:17.544672   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:17.554938   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:17.555022   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:17.565455   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:17.565529   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:17.575590   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:17.575666   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:17.586382   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:17.586530   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:17.596818   12967 logs.go:282] 0 containers: []
	W1025 16:17:17.596833   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:17.596899   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:17.607700   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:17.607711   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:17.607716   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:17.643975   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:17.643986   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:17.649007   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:17.649017   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:17.667965   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:17.667974   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:17.684235   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:17.684250   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:17.697540   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:17.697552   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:17.717756   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:17.717771   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:17.729976   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:17.729989   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:17.773516   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:17.773527   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:17.790034   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:17.790045   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:17.804028   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:17.804038   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:17.816291   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:17.816305   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:17.829505   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:17.829517   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:20.356896   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:25.359119   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:25.359306   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:25.377437   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:25.377547   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:25.391669   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:25.391747   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:25.405505   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:25.405583   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:25.416142   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:25.416226   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:25.426432   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:25.426511   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:25.436802   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:25.436881   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:25.446802   12967 logs.go:282] 0 containers: []
	W1025 16:17:25.446817   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:25.446889   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:25.457106   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:25.457124   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:25.457131   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:25.468761   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:25.468771   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:25.480686   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:25.480702   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:25.495798   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:25.495810   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:25.512585   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:25.512601   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:25.525264   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:25.525274   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:25.544105   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:25.544117   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:25.557184   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:25.557193   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:25.569738   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:25.569750   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:25.595161   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:25.595170   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:25.631648   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:25.631656   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:25.636624   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:25.636635   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:25.680085   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:25.680097   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:28.196483   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:33.198622   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:33.198757   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:33.211340   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:33.211432   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:33.222613   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:33.222703   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:33.233180   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:33.233271   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:33.243072   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:33.243156   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:33.253803   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:33.253886   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:33.264427   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:33.264505   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:33.274693   12967 logs.go:282] 0 containers: []
	W1025 16:17:33.274704   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:33.274769   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:33.285957   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:33.285977   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:33.285985   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:33.297559   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:33.297569   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:33.309768   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:33.309780   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:33.321174   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:33.321187   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:33.336865   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:33.336877   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:33.361785   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:33.361802   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:33.367012   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:33.367025   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:33.447727   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:33.447735   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:33.459722   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:33.459735   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:33.479410   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:33.479420   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:33.495481   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:33.495493   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:33.532124   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:33.532133   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:33.548621   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:33.548634   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:33.563235   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:33.563248   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:33.576536   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:33.576548   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:36.091047   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:41.093115   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:41.093278   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:41.113018   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:41.113100   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:41.125511   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:41.125600   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:41.137585   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:41.137670   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:41.148319   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:41.148395   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:41.164022   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:41.164117   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:41.174461   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:41.174540   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:41.185130   12967 logs.go:282] 0 containers: []
	W1025 16:17:41.185141   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:41.185212   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:41.195744   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:41.195765   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:41.195779   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:41.218973   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:41.218980   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:41.255367   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:41.255388   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:41.260570   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:41.260580   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:41.272459   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:41.272471   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:41.289951   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:41.289967   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:41.302688   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:41.302698   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:41.344018   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:41.344028   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:41.362949   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:41.362960   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:41.376324   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:41.376337   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:41.400484   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:41.400498   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:41.415188   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:41.415198   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:41.431442   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:41.431451   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:41.448630   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:41.448645   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:41.461075   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:41.461091   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:43.975887   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:48.978197   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:48.978681   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:49.006202   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:49.006355   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:49.024484   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:49.024584   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:49.038807   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:49.038898   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:49.050031   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:49.050111   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:49.063869   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:49.063953   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:49.074312   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:49.074388   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:49.089010   12967 logs.go:282] 0 containers: []
	W1025 16:17:49.089023   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:49.089096   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:49.099577   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:49.099595   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:49.099601   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:49.111248   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:49.111258   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:49.129483   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:49.129499   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:49.148397   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:49.148407   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:49.166894   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:49.166905   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:49.193741   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:49.193759   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:49.207076   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:49.207088   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:49.245291   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:49.245301   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:49.257682   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:49.257694   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:49.263030   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:49.263042   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:49.301814   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:49.301824   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:49.318032   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:49.318042   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:49.330863   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:49.330875   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:49.343453   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:49.343465   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:49.359246   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:49.359258   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:51.877003   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:56.879611   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:56.879928   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:56.905797   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:56.905936   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:56.922865   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:56.922961   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:56.936039   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:56.936130   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:56.947199   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:56.947279   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:56.957355   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:56.957442   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:56.967543   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:56.967625   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:56.982903   12967 logs.go:282] 0 containers: []
	W1025 16:17:56.982917   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:56.982987   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:56.993966   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:56.993983   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:56.993989   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:57.031403   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:57.031413   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:57.045868   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:57.045881   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:57.058638   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:57.058651   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:57.085020   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:57.085037   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:57.106114   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:57.106124   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:57.129288   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:57.129301   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:57.135025   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:57.135036   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:57.174471   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:57.174482   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:57.193099   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:57.193111   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:57.212372   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:57.212381   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:57.232425   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:57.232436   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:57.245903   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:57.245914   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:57.261051   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:57.261064   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:57.273885   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:57.273893   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:59.803132   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:04.806003   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:04.806536   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:04.843840   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:04.844000   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:04.865195   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:04.865298   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:04.880462   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:04.880559   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:04.892774   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:04.892855   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:04.904532   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:04.904613   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:04.915156   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:04.915202   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:04.926989   12967 logs.go:282] 0 containers: []
	W1025 16:18:04.926996   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:04.927037   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:04.943812   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:04.943832   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:04.943838   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:04.957191   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:04.957202   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:04.977421   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:04.977432   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:05.004398   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:05.004409   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:05.043049   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:05.043063   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:05.055557   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:05.055568   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:05.069439   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:05.069451   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:05.084841   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:05.084857   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:05.097151   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:05.097158   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:05.115162   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:05.115173   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:05.130772   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:05.130784   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:05.170946   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:05.170959   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:05.188133   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:05.188147   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:05.201174   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:05.201185   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:05.214291   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:05.214303   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:07.721321   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:12.723518   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:12.723724   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:12.739543   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:12.739629   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:12.752461   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:12.752541   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:12.766796   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:12.766873   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:12.781590   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:12.781673   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:12.791821   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:12.791905   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:12.802688   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:12.802765   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:12.814025   12967 logs.go:282] 0 containers: []
	W1025 16:18:12.814039   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:12.814112   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:12.825819   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:12.825836   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:12.825842   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:12.863591   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:12.863607   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:12.876742   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:12.876755   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:12.894201   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:12.894210   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:12.907460   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:12.907471   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:12.933599   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:12.933611   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:12.971998   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:12.972009   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:12.985250   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:12.985261   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:12.998242   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:12.998255   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:13.021566   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:13.021576   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:13.035011   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:13.035020   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:13.040127   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:13.040139   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:13.054999   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:13.055011   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:13.068135   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:13.068147   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:13.083701   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:13.083712   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:15.597491   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:20.599929   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:20.600095   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:20.613695   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:20.613783   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:20.625205   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:20.625281   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:20.636368   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:20.636455   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:20.646735   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:20.646812   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:20.663787   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:20.663862   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:20.674352   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:20.674431   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:20.685791   12967 logs.go:282] 0 containers: []
	W1025 16:18:20.685802   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:20.685876   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:20.697696   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:20.697710   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:20.697715   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:20.736862   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:20.736904   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:20.750115   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:20.750126   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:20.767864   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:20.767875   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:20.808134   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:20.808142   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:20.821136   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:20.821147   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:20.834738   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:20.834751   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:20.839682   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:20.839693   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:20.855070   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:20.855087   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:20.871896   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:20.871909   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:20.884123   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:20.884135   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:20.903427   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:20.903436   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:20.916607   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:20.916619   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:20.942333   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:20.942346   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:20.958028   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:20.958038   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:23.472679   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:28.474894   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:28.475034   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:28.487136   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:28.487226   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:28.497960   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:28.498031   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:28.510692   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:28.510764   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:28.521635   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:28.521711   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:28.532610   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:28.532690   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:28.542928   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:28.543006   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:28.556627   12967 logs.go:282] 0 containers: []
	W1025 16:18:28.556637   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:28.556693   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:28.567933   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:28.567952   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:28.567958   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:28.583875   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:28.583886   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:28.596475   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:28.596488   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:28.609457   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:28.609470   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:28.646771   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:28.646785   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:28.659439   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:28.659450   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:28.664312   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:28.664320   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:28.677645   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:28.677654   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:28.693825   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:28.693834   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:28.713718   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:28.713735   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:28.739624   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:28.739634   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:28.751968   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:28.751984   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:28.791313   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:28.791326   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:28.813986   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:28.814000   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:28.829975   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:28.829987   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:31.344490   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:36.346883   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:36.347179   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:36.369824   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:36.369963   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:36.386233   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:36.386331   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:36.402429   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:36.402518   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:36.413622   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:36.413703   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:36.423562   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:36.423640   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:36.434091   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:36.434167   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:36.444982   12967 logs.go:282] 0 containers: []
	W1025 16:18:36.444993   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:36.445058   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:36.455604   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:36.455620   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:36.455626   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:36.468195   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:36.468204   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:36.480807   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:36.480817   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:36.498750   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:36.498763   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:36.540680   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:36.540694   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:36.553536   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:36.553544   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:36.578995   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:36.579009   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:36.614966   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:36.614978   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:36.619912   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:36.619921   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:36.632065   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:36.632076   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:36.644951   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:36.644963   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:36.658773   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:36.658786   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:36.675340   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:36.675353   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:36.689849   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:36.689865   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:36.702247   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:36.702259   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:39.220550   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:44.223166   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:44.223442   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:44.246807   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:44.246947   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:44.263136   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:44.263238   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:44.275866   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:44.275953   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:44.287731   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:44.287816   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:44.298121   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:44.298200   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:44.308735   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:44.308814   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:44.318715   12967 logs.go:282] 0 containers: []
	W1025 16:18:44.318727   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:44.318786   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:44.329408   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:44.329424   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:44.329430   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:44.341002   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:44.341010   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:44.346166   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:44.346179   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:44.361619   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:44.361631   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:44.374948   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:44.374961   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:44.388724   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:44.388733   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:44.404373   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:44.404387   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:44.423627   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:44.423637   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:44.461754   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:44.461770   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:44.476810   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:44.476820   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:44.490067   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:44.490079   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:44.516769   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:44.516784   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:44.531674   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:44.531687   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:44.570884   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:44.570897   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:44.584598   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:44.584609   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:47.100214   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:52.102810   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:52.103285   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:52.143043   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:52.143186   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:52.171193   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:52.171298   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:52.184324   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:52.184400   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:52.195195   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:52.195282   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:52.206102   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:52.206185   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:52.217368   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:52.217453   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:52.228587   12967 logs.go:282] 0 containers: []
	W1025 16:18:52.228597   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:52.228665   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:52.240417   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:52.240435   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:52.240440   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:52.253236   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:52.253247   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:52.266653   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:52.266663   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:52.280083   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:52.280092   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:52.295407   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:52.295419   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:52.308882   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:52.308895   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:52.325487   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:52.325504   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:52.351784   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:52.351795   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:52.364201   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:52.364214   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:52.403326   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:52.403338   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:52.419037   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:52.419051   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:52.424348   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:52.424363   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:52.436970   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:52.436982   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:52.462968   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:52.462979   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:52.475030   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:52.475042   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:55.014530   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:00.016247   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:00.016834   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:00.053915   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:00.054078   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:00.075514   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:00.075626   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:00.090162   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:19:00.090255   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:00.105396   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:00.105480   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:00.123186   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:00.123272   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:00.139998   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:00.140085   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:00.151063   12967 logs.go:282] 0 containers: []
	W1025 16:19:00.151080   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:00.151151   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:00.162372   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:00.162392   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:00.162398   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:00.167217   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:00.167226   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:00.180196   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:00.180208   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:00.192634   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:00.192647   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:00.205846   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:19:00.205858   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:19:00.217933   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:00.217946   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:00.233689   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:00.233701   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:00.251729   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:00.251743   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:00.278110   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:00.278132   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:00.316593   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:00.316610   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:00.332113   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:00.332127   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:00.347898   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:00.347913   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:00.363230   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:00.363244   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:00.376447   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:00.376459   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:00.414356   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:19:00.414375   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:19:02.929360   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:07.931541   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:07.931655   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:07.943939   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:07.944026   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:07.959163   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:07.959242   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:07.970426   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:19:07.970512   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:07.981156   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:07.981241   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:07.995485   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:07.995562   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:08.006748   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:08.006828   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:08.017510   12967 logs.go:282] 0 containers: []
	W1025 16:19:08.017524   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:08.017598   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:08.028448   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:08.028467   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:08.028473   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:08.033209   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:08.033216   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:08.045965   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:19:08.045977   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:19:08.059152   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:08.059168   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:08.075716   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:08.075730   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:08.096200   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:08.096214   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:08.135309   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:08.135332   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:08.158223   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:19:08.158237   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:19:08.172333   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:08.172346   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:08.184971   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:08.184981   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:08.197051   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:08.197064   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:08.222915   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:08.222941   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:08.237725   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:08.237736   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:08.273884   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:08.273899   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:08.291075   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:08.291087   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:10.806762   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:15.807321   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:15.807458   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:15.819326   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:15.819414   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:15.830178   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:15.830272   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:15.841190   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:19:15.841269   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:15.851745   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:15.851825   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:15.861934   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:15.862013   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:15.872836   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:15.872915   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:15.883163   12967 logs.go:282] 0 containers: []
	W1025 16:19:15.883174   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:15.883243   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:15.894081   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:15.894101   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:15.894108   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:15.905931   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:15.905944   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:15.910385   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:15.910395   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:15.921982   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:15.921992   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:15.933893   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:15.933903   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:15.958488   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:15.958497   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:15.977661   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:15.977671   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:15.989228   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:15.989239   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:16.024656   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:16.024667   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:16.039715   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:19:16.039732   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:19:16.051490   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:19:16.051499   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:19:16.063346   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:16.063361   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:16.098004   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:16.098015   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:16.111984   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:16.111994   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:16.128324   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:16.128336   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:18.641981   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:23.644132   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:23.644298   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:23.659481   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:23.659581   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:23.671385   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:23.671472   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:23.682476   12967 logs.go:282] 4 containers: [aa15ca65191d 887b6293ef77 7f00f3bb70a3 2eee8f96b914]
	I1025 16:19:23.682551   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:23.696641   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:23.696730   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:23.707220   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:23.707293   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:23.717593   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:23.717672   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:23.728393   12967 logs.go:282] 0 containers: []
	W1025 16:19:23.728406   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:23.728478   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:23.739365   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:23.739383   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:23.739397   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:23.754700   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:23.754710   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:23.767175   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:23.767188   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:23.784343   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:23.784355   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:23.796618   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:23.796633   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:23.830964   12967 logs.go:123] Gathering logs for coredns [887b6293ef77] ...
	I1025 16:19:23.830972   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 887b6293ef77"
	I1025 16:19:23.845446   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:23.845459   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:23.857200   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:23.857211   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:23.871758   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:23.871771   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:23.886563   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:23.886575   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:23.900047   12967 logs.go:123] Gathering logs for coredns [aa15ca65191d] ...
	I1025 16:19:23.900056   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa15ca65191d"
	I1025 16:19:23.915685   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:23.915698   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:23.927563   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:23.927574   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:23.931870   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:23.931878   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:23.966949   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:23.966960   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:26.492930   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:31.495185   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:31.499640   12967 out.go:201] 
	W1025 16:19:31.502570   12967 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1025 16:19:31.502577   12967 out.go:270] * 
	W1025 16:19:31.503120   12967 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:19:31.514485   12967 out.go:201] 

** /stderr **
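For anyone triaging the transcript above: each cycle is minikube's api_server.go probing the guest apiserver healthz endpoint, timing out after roughly five seconds with "context deadline exceeded", and re-collecting component logs before retrying. A hypothetical manual check of the same endpoint (assuming curl is available inside the guest image; 10.0.2.15:8443 is the guest address from the log, and -k skips verification of the cluster's self-signed certificate):

	minikube -p running-upgrade-023000 ssh -- curl -k https://10.0.2.15:8443/healthz

A healthy apiserver answers "ok"; here the request never completes, which is consistent with the loop never exiting.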
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-023000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
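The failing invocation is recorded verbatim in the line above; a minimal local reproduction sketch (assuming a checkout with the darwin-arm64 binary built at out/minikube-darwin-arm64 and QEMU installed):

	out/minikube-darwin-arm64 start -p running-upgrade-023000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2

Per the log, this exits with status 80 (GUEST_START) when the apiserver never reports healthy within the 6m0s node-wait window.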
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-25 16:19:31.605207 -0700 PDT m=+1294.865890459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-023000 -n running-upgrade-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-023000 -n running-upgrade-023000: exit status 2 (15.615269792s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-023000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-958000          | force-systemd-flag-958000 | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-462000              | force-systemd-env-462000  | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-462000           | force-systemd-env-462000  | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT | 25 Oct 24 16:09 PDT |
	| start   | -p docker-flags-171000                | docker-flags-171000       | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-958000             | force-systemd-flag-958000 | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-958000          | force-systemd-flag-958000 | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT | 25 Oct 24 16:09 PDT |
	| start   | -p cert-expiration-057000             | cert-expiration-057000    | jenkins | v1.34.0 | 25 Oct 24 16:09 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-171000 ssh               | docker-flags-171000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-171000 ssh               | docker-flags-171000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-171000                | docker-flags-171000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT | 25 Oct 24 16:10 PDT |
	| start   | -p cert-options-507000                | cert-options-507000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-507000 ssh               | cert-options-507000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-507000 -- sudo        | cert-options-507000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-507000                | cert-options-507000       | jenkins | v1.34.0 | 25 Oct 24 16:10 PDT | 25 Oct 24 16:10 PDT |
	| start   | -p running-upgrade-023000             | minikube                  | jenkins | v1.26.0 | 25 Oct 24 16:10 PDT | 25 Oct 24 16:11 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-023000             | running-upgrade-023000    | jenkins | v1.34.0 | 25 Oct 24 16:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-057000             | cert-expiration-057000    | jenkins | v1.34.0 | 25 Oct 24 16:13 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-057000             | cert-expiration-057000    | jenkins | v1.34.0 | 25 Oct 24 16:13 PDT | 25 Oct 24 16:13 PDT |
	| start   | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.34.0 | 25 Oct 24 16:13 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.34.0 | 25 Oct 24 16:13 PDT | 25 Oct 24 16:13 PDT |
	| start   | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.34.0 | 25 Oct 24 16:13 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-410000          | kubernetes-upgrade-410000 | jenkins | v1.34.0 | 25 Oct 24 16:13 PDT | 25 Oct 24 16:13 PDT |
	| start   | -p stopped-upgrade-782000             | minikube                  | jenkins | v1.26.0 | 25 Oct 24 16:13 PDT | 25 Oct 24 16:14 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-782000 stop           | minikube                  | jenkins | v1.26.0 | 25 Oct 24 16:14 PDT | 25 Oct 24 16:14 PDT |
	| start   | -p stopped-upgrade-782000             | stopped-upgrade-782000    | jenkins | v1.34.0 | 25 Oct 24 16:14 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 16:14:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
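	
	The "Log line format" header above is the standard glog/klog layout: severity letter, month and day, wall-clock time with microseconds, thread (goroutine) id, and file:line, then the message. When slicing a long capture like this one, a small parser can help; a hedged sketch (the regexp is written against the entries below, not taken from any library):
	
		package main
	
		import (
			"fmt"
			"regexp"
		)
	
		// glogHeader matches lines like
		//   I1025 16:14:28.282945   13110 out.go:345] Setting OutFile to fd 1 ...
		// capturing severity, date, time, thread id, source location, and message.
		var glogHeader = regexp.MustCompile(
			`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)
	
		func main() {
			line := "I1025 16:14:28.282945   13110 out.go:345] Setting OutFile to fd 1 ..."
			if m := glogHeader.FindStringSubmatch(line); m != nil {
				fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6])
			}
		}
	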
	I1025 16:14:28.282945   13110 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:14:28.283124   13110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:14:28.283128   13110 out.go:358] Setting ErrFile to fd 2...
	I1025 16:14:28.283131   13110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:14:28.283271   13110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:14:28.284560   13110 out.go:352] Setting JSON to false
	I1025 16:14:28.304950   13110 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7306,"bootTime":1729890762,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:14:28.305042   13110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:14:28.309767   13110 out.go:177] * [stopped-upgrade-782000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:14:28.318838   13110 notify.go:220] Checking for updates...
	I1025 16:14:28.322674   13110 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:14:28.325708   13110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:14:28.328744   13110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:14:28.331679   13110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:14:27.183239   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:27.183903   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:27.222221   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:27.222389   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:27.244351   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:27.244481   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:27.260130   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:27.260223   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:27.272504   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:27.272588   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:27.284723   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:27.284804   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:27.300873   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:27.300957   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:27.316645   12967 logs.go:282] 0 containers: []
	W1025 16:14:27.316657   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:27.316736   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:27.328656   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:27.328678   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:27.328683   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:27.341535   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:27.341549   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:27.366043   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:27.366063   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:27.370888   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:27.370896   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:27.387378   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:27.387390   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:27.400604   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:27.400614   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:27.418637   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:27.418648   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:27.431084   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:27.431094   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:27.445226   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:27.445236   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:27.457123   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:27.457134   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:27.468704   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:27.468714   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:27.512644   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:27.512660   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:27.524198   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:27.524212   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:27.543135   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:27.543147   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:27.560734   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:27.560757   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:27.573781   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:27.573796   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:27.613259   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:27.613273   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
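	
	The repeating blocks above are minikube's diagnostic sweep between healthz probes: for each control-plane component it lists matching containers with `docker ps -a --filter name=... --format {{.ID}}`, then tails the last 400 lines of each container's log. A rough stand-alone sketch of the same two-step pattern via os/exec (the component names and tail length mirror the log; this is not minikube's source):
	
		package main
	
		import (
			"fmt"
			"os/exec"
			"strings"
		)
	
		// containerIDs returns the IDs of all containers, running or not,
		// whose name matches the filter, mirroring
		// `docker ps -a --filter name=<f> --format {{.ID}}`.
		func containerIDs(nameFilter string) ([]string, error) {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name="+nameFilter,
				"--format", "{{.ID}}").Output()
			if err != nil {
				return nil, err
			}
			return strings.Fields(string(out)), nil
		}
	
		func main() {
			for _, component := range []string{
				"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
				"k8s_kube-scheduler", "k8s_kube-proxy",
				"k8s_kube-controller-manager", "k8s_storage-provisioner",
			} {
				ids, err := containerIDs(component)
				if err != nil {
					fmt.Println("docker ps failed:", err)
					return
				}
				for _, id := range ids {
					// Tail the last 400 lines, as the gathering above does.
					logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
					fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
				}
			}
		}
	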
	I1025 16:14:28.338771   13110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:14:28.342675   13110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:14:28.346935   13110 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:14:28.350531   13110 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1025 16:14:28.353794   13110 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:14:28.356716   13110 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:14:28.364707   13110 start.go:297] selected driver: qemu2
	I1025 16:14:28.364713   13110 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:14:28.364768   13110 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:14:28.367437   13110 cni.go:84] Creating CNI manager for ""
	I1025 16:14:28.367465   13110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:14:28.367487   13110 start.go:340] cluster config:
	{Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:14:28.367542   13110 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:14:28.375669   13110 out.go:177] * Starting "stopped-upgrade-782000" primary control-plane node in "stopped-upgrade-782000" cluster
	I1025 16:14:28.379721   13110 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 16:14:28.379741   13110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1025 16:14:28.379750   13110 cache.go:56] Caching tarball of preloaded images
	I1025 16:14:28.379828   13110 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:14:28.379838   13110 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1025 16:14:28.379878   13110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/config.json ...
	I1025 16:14:28.380295   13110 start.go:360] acquireMachinesLock for stopped-upgrade-782000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:14:28.380339   13110 start.go:364] duration metric: took 37.958µs to acquireMachinesLock for "stopped-upgrade-782000"
	I1025 16:14:28.380346   13110 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:14:28.380351   13110 fix.go:54] fixHost starting: 
	I1025 16:14:28.380448   13110 fix.go:112] recreateIfNeeded on stopped-upgrade-782000: state=Stopped err=<nil>
	W1025 16:14:28.380456   13110 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:14:28.384693   13110 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-782000" ...
	I1025 16:14:28.392645   13110 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:14:28.392714   13110 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/qemu.pid -nic user,model=virtio,hostfwd=tcp::62363-:22,hostfwd=tcp::62364-:2376,hostname=stopped-upgrade-782000 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/disk.qcow2
	I1025 16:14:28.439013   13110 main.go:141] libmachine: STDOUT: 
	I1025 16:14:28.439042   13110 main.go:141] libmachine: STDERR: 
	I1025 16:14:28.439048   13110 main.go:141] libmachine: Waiting for VM to start (ssh -p 62363 docker@127.0.0.1)...
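	
	"Waiting for VM to start (ssh -p 62363 docker@127.0.0.1)" relies on the `hostfwd=tcp::62363-:22` user-mode forward in the QEMU command line above: host port 62363 only accepts connections once the guest's sshd is listening. One plausible way to implement such a wait is to dial the port until it connects; a sketch with illustrative retry interval and timeout:
	
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		// waitForPort dials addr until a TCP connection succeeds or the
		// deadline passes. QEMU's hostfwd only completes the connection
		// once the guest service is up, so a successful dial means the
		// VM has booted far enough to accept SSH.
		func waitForPort(addr string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
				if err == nil {
					conn.Close()
					return nil
				}
				time.Sleep(time.Second)
			}
			return fmt.Errorf("%s did not accept connections within %s", addr, timeout)
		}
	
		func main() {
			fmt.Println(waitForPort("127.0.0.1:62363", 5*time.Minute))
		}
	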
	I1025 16:14:30.140669   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:35.143530   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:35.143755   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:35.161619   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:35.161709   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:35.172782   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:35.172870   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:35.184013   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:35.184095   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:35.195191   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:35.195268   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:35.206137   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:35.206221   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:35.217263   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:35.217342   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:35.228763   12967 logs.go:282] 0 containers: []
	W1025 16:14:35.228776   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:35.228848   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:35.239352   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:35.239369   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:35.239375   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:35.257561   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:35.257574   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:35.281118   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:35.281125   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:35.297011   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:35.297022   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:35.338793   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:35.338803   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:35.353150   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:35.353163   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:35.366419   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:35.366431   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:35.377801   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:35.377812   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:35.390212   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:35.390223   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:35.404810   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:35.404822   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:35.430264   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:35.430274   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:35.442580   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:35.442590   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:35.454273   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:35.454283   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:35.458921   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:35.458928   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:35.494171   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:35.494182   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:35.510308   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:35.510318   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:35.522646   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:35.522660   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:38.036619   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:43.038873   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:43.039079   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:43.053891   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:43.053978   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:43.065473   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:43.065558   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:43.076263   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:43.076339   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:43.086506   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:43.086593   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:43.097455   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:43.097527   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:43.109042   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:43.109127   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:43.119286   12967 logs.go:282] 0 containers: []
	W1025 16:14:43.119298   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:43.119366   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:43.135891   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:43.135908   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:43.135912   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:43.147047   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:43.147058   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:43.186612   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:43.186622   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:43.190960   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:43.190969   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:43.224986   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:43.224999   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:43.243356   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:43.243365   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:43.267645   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:43.267654   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:43.278968   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:43.278977   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:43.298713   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:43.298726   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:43.318607   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:43.318616   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:43.329781   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:43.329789   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:43.344297   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:43.344308   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:43.355876   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:43.355889   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:43.373691   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:43.373702   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:43.385368   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:43.385378   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:43.400378   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:43.400390   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:43.425512   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:43.425526   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:45.943503   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:48.332756   13110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/config.json ...
	I1025 16:14:48.333776   13110 machine.go:93] provisionDockerMachine start ...
	I1025 16:14:48.334213   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.334633   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.334647   13110 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 16:14:48.423239   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 16:14:48.423262   13110 buildroot.go:166] provisioning hostname "stopped-upgrade-782000"
	I1025 16:14:48.423362   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.423546   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.423558   13110 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-782000 && echo "stopped-upgrade-782000" | sudo tee /etc/hostname
	I1025 16:14:48.503656   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-782000
	
	I1025 16:14:48.503732   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.503863   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.503876   13110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-782000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-782000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-782000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 16:14:48.576306   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
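	
	Each "About to run SSH command" entry is a one-shot command executed over the machine's SSH session, authenticating as the docker user with the profile's private key (see the sshutil lines). A minimal sketch of that round trip using golang.org/x/crypto/ssh; the address, user, key path, and command come from this log, while the library choice and simplified error handling are assumptions, not minikube's implementation:
	
		package main
	
		import (
			"fmt"
			"os"
	
			"golang.org/x/crypto/ssh"
		)
	
		// runOverSSH executes cmd on addr with public-key auth and
		// returns the combined stdout/stderr.
		func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
			key, err := os.ReadFile(keyPath)
			if err != nil {
				return "", err
			}
			signer, err := ssh.ParsePrivateKey(key)
			if err != nil {
				return "", err
			}
			cfg := &ssh.ClientConfig{
				User: user,
				Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
				// The VM's host key changes across recreations, so the
				// sketch does not pin it; only acceptable for a
				// throwaway test VM.
				HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			}
			client, err := ssh.Dial("tcp", addr, cfg)
			if err != nil {
				return "", err
			}
			defer client.Close()
			session, err := client.NewSession()
			if err != nil {
				return "", err
			}
			defer session.Close()
			out, err := session.CombinedOutput(cmd)
			return string(out), err
		}
	
		func main() {
			out, err := runOverSSH("127.0.0.1:62363", "docker",
				"/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa",
				"hostname")
			fmt.Println(out, err)
		}
	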
	I1025 16:14:48.576319   13110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19758-10490/.minikube CaCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19758-10490/.minikube}
	I1025 16:14:48.576330   13110 buildroot.go:174] setting up certificates
	I1025 16:14:48.576335   13110 provision.go:84] configureAuth start
	I1025 16:14:48.576343   13110 provision.go:143] copyHostCerts
	I1025 16:14:48.576412   13110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem, removing ...
	I1025 16:14:48.576418   13110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem
	I1025 16:14:48.576533   13110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem (1078 bytes)
	I1025 16:14:48.576740   13110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem, removing ...
	I1025 16:14:48.576746   13110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem
	I1025 16:14:48.576797   13110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem (1123 bytes)
	I1025 16:14:48.576931   13110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem, removing ...
	I1025 16:14:48.576935   13110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem
	I1025 16:14:48.576977   13110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem (1675 bytes)
	I1025 16:14:48.577088   13110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-782000 san=[127.0.0.1 localhost minikube stopped-upgrade-782000]
	I1025 16:14:48.667891   13110 provision.go:177] copyRemoteCerts
	I1025 16:14:48.667939   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 16:14:48.667946   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:14:48.704330   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 16:14:48.711701   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 16:14:48.718943   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 16:14:48.725860   13110 provision.go:87] duration metric: took 149.518083ms to configureAuth
	I1025 16:14:48.725870   13110 buildroot.go:189] setting minikube options for container-runtime
	I1025 16:14:48.725987   13110 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:14:48.726038   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.726124   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.726129   13110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 16:14:48.795326   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 16:14:48.795335   13110 buildroot.go:70] root file system type: tmpfs
	I1025 16:14:48.795389   13110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 16:14:48.795448   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.795556   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.795592   13110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 16:14:48.867380   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 16:14:48.867447   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.867560   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.867569   13110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 16:14:49.264586   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 16:14:49.264599   13110 machine.go:96] duration metric: took 930.818541ms to provisionDockerMachine
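	
	The `diff -u ... || { mv ...; systemctl ... }` command above is an idempotence guard: diff exits non-zero when the installed unit differs from the freshly rendered one, or, as here, does not exist yet ("can't stat"), so the daemon-reload/enable/restart branch runs only when something actually changed. The same compare-then-swap shape in Go, as an illustrative sketch (the path and service name mirror the log; this is not minikube's code):
	
		package main
	
		import (
			"bytes"
			"fmt"
			"os"
			"os/exec"
		)
	
		// installUnit writes content to path and bounces the service only
		// when the on-disk unit differs or is missing, mirroring the
		// `diff -u old new || { mv; daemon-reload; restart; }` idiom.
		func installUnit(path string, content []byte, service string) error {
			existing, err := os.ReadFile(path)
			if err == nil && bytes.Equal(existing, content) {
				return nil // unchanged: nothing to do
			}
			if err := os.WriteFile(path, content, 0o644); err != nil {
				return err
			}
			for _, args := range [][]string{
				{"daemon-reload"},
				{"enable", service},
				{"restart", service},
			} {
				if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
					return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
				}
			}
			return nil
		}
	
		func main() {
			unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
			fmt.Println(installUnit("/lib/systemd/system/docker.service", unit, "docker"))
		}
	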
	I1025 16:14:49.264606   13110 start.go:293] postStartSetup for "stopped-upgrade-782000" (driver="qemu2")
	I1025 16:14:49.264613   13110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 16:14:49.264686   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 16:14:49.264696   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:14:49.302669   13110 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 16:14:49.304287   13110 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 16:14:49.304296   13110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19758-10490/.minikube/addons for local assets ...
	I1025 16:14:49.304381   13110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19758-10490/.minikube/files for local assets ...
	I1025 16:14:49.304476   13110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem -> 109982.pem in /etc/ssl/certs
	I1025 16:14:49.304584   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 16:14:49.308409   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem --> /etc/ssl/certs/109982.pem (1708 bytes)
	I1025 16:14:49.316278   13110 start.go:296] duration metric: took 51.664583ms for postStartSetup
	I1025 16:14:49.316298   13110 fix.go:56] duration metric: took 20.936093709s for fixHost
	I1025 16:14:49.316365   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:49.316485   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:49.316491   13110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 16:14:49.385972   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729898089.131940504
	
	I1025 16:14:49.385980   13110 fix.go:216] guest clock: 1729898089.131940504
	I1025 16:14:49.385988   13110 fix.go:229] Guest: 2024-10-25 16:14:49.131940504 -0700 PDT Remote: 2024-10-25 16:14:49.3163 -0700 PDT m=+21.067462959 (delta=-184.359496ms)
	I1025 16:14:49.386002   13110 fix.go:200] guest clock delta is within tolerance: -184.359496ms
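	
	The guest-clock check runs `date +%s.%N` in the VM and compares the result against the host clock captured at roughly the same moment; here the delta was about -184ms, within tolerance. A sketch of that comparison, reusing the timestamp string from the log; the 5s tolerance is illustrative, since the threshold minikube applies is not shown here:
	
		package main
	
		import (
			"fmt"
			"strconv"
			"strings"
			"time"
		)
	
		// parseGuestClock converts `date +%s.%N` output into a time.Time.
		func parseGuestClock(s string) (time.Time, error) {
			parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
			sec, err := strconv.ParseInt(parts[0], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
			var nsec int64
			if len(parts) == 2 {
				if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
					return time.Time{}, err
				}
			}
			return time.Unix(sec, nsec), nil
		}
	
		func main() {
			guest, err := parseGuestClock("1729898089.131940504") // value from the log
			if err != nil {
				panic(err)
			}
			delta := guest.Sub(time.Now())
			if delta < 0 {
				delta = -delta
			}
			fmt.Printf("guest clock delta %s within tolerance: %v\n", delta, delta < 5*time.Second)
		}
	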
	I1025 16:14:49.386004   13110 start.go:83] releasing machines lock for "stopped-upgrade-782000", held for 21.005807792s
	I1025 16:14:49.386073   13110 ssh_runner.go:195] Run: cat /version.json
	I1025 16:14:49.386076   13110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 16:14:49.386081   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:14:49.386092   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	W1025 16:14:49.386666   13110 sshutil.go:64] dial failure (will retry): dial tcp [::1]:62363: connect: connection refused
	I1025 16:14:49.386695   13110 retry.go:31] will retry after 248.187236ms: dial tcp [::1]:62363: connect: connection refused
	W1025 16:14:49.678938   13110 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 16:14:49.679041   13110 ssh_runner.go:195] Run: systemctl --version
	I1025 16:14:49.681616   13110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 16:14:49.683937   13110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 16:14:49.683990   13110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 16:14:49.688050   13110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 16:14:49.693971   13110 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
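	
	The find/sed pair above patches any bridge or podman CNI config so its subnet becomes the pod CIDR 10.244.0.0/16. Because conflist files are JSON, the same rewrite can be done structurally rather than with regexes; a hedged Go sketch (the file name matches the one reported configured above, and the traversal assumes the common conflist shape with per-plugin ipam blocks):
	
		package main
	
		import (
			"encoding/json"
			"fmt"
			"os"
		)
	
		// setBridgeSubnet rewrites the ipam subnet of every plugin in a
		// CNI conflist to the given CIDR, the structured equivalent of
		// the sed rewrite in the log.
		func setBridgeSubnet(path, cidr string) error {
			raw, err := os.ReadFile(path)
			if err != nil {
				return err
			}
			var conf map[string]interface{}
			if err := json.Unmarshal(raw, &conf); err != nil {
				return err
			}
			plugins, _ := conf["plugins"].([]interface{})
			for _, p := range plugins {
				plugin, ok := p.(map[string]interface{})
				if !ok {
					continue
				}
				if ipam, ok := plugin["ipam"].(map[string]interface{}); ok {
					if _, has := ipam["subnet"]; has {
						ipam["subnet"] = cidr
					}
				}
			}
			out, err := json.MarshalIndent(conf, "", "  ")
			if err != nil {
				return err
			}
			return os.WriteFile(path, out, 0o644)
		}
	
		func main() {
			fmt.Println(setBridgeSubnet("/etc/cni/net.d/87-podman-bridge.conflist", "10.244.0.0/16"))
		}
	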
	I1025 16:14:49.693980   13110 start.go:495] detecting cgroup driver to use...
	I1025 16:14:49.694065   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 16:14:49.702045   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1025 16:14:49.705534   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 16:14:49.708527   13110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 16:14:49.708559   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 16:14:49.711841   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 16:14:49.714997   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 16:14:49.717821   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 16:14:49.720645   13110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 16:14:49.723867   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 16:14:49.727240   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 16:14:49.730185   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 16:14:49.733320   13110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 16:14:49.736240   13110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 16:14:49.739468   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:49.820226   13110 ssh_runner.go:195] Run: sudo systemctl restart containerd
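The block above reconfigures containerd for the cgroupfs driver entirely through sed edits to /etc/containerd/config.toml, then reloads and restarts the service. A minimal Go sketch of the key rewrite (forcing SystemdCgroup = false) is below; the in-place file rewrite is an illustration of what the sed one-liner does, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}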
	I1025 16:14:49.827268   13110 start.go:495] detecting cgroup driver to use...
	I1025 16:14:49.827363   13110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 16:14:49.832743   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 16:14:49.837902   13110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 16:14:49.843474   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 16:14:49.848334   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 16:14:49.853295   13110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 16:14:49.911662   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 16:14:49.916997   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 16:14:49.922492   13110 ssh_runner.go:195] Run: which cri-dockerd
	I1025 16:14:49.923748   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 16:14:49.927003   13110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 16:14:49.932203   13110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 16:14:50.024679   13110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 16:14:50.093727   13110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 16:14:50.093800   13110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 16:14:50.099127   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:50.163392   13110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 16:14:51.292203   13110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.128790292s)
	I1025 16:14:51.292324   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 16:14:51.297957   13110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1025 16:14:51.304736   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 16:14:51.310473   13110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 16:14:51.392629   13110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 16:14:51.472134   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:51.546027   13110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 16:14:51.551975   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 16:14:51.556979   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:51.641670   13110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 16:14:51.680099   13110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 16:14:51.680203   13110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 16:14:51.683775   13110 start.go:563] Will wait 60s for crictl version
	I1025 16:14:51.683843   13110 ssh_runner.go:195] Run: which crictl
	I1025 16:14:51.685295   13110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 16:14:51.700818   13110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1025 16:14:51.700902   13110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 16:14:51.718073   13110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
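The two `docker version --format {{.Server.Version}}` calls above are how the runtime version ("20.10.16") gets probed before the "Preparing Kubernetes" message. A short sketch of that probe, assuming docker is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		fmt.Println("docker not reachable:", err)
		return
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out)))
}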
	I1025 16:14:51.736912   13110 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1025 16:14:51.737092   13110 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1025 16:14:51.738341   13110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 16:14:51.741788   13110 kubeadm.go:883] updating cluster {Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1025 16:14:51.741831   13110 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 16:14:51.741878   13110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 16:14:51.752499   13110 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 16:14:51.752509   13110 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 16:14:51.752567   13110 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 16:14:51.756096   13110 ssh_runner.go:195] Run: which lz4
	I1025 16:14:51.757483   13110 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 16:14:51.758631   13110 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 16:14:51.758642   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1025 16:14:52.667389   13110 docker.go:653] duration metric: took 909.965333ms to copy over tarball
	I1025 16:14:52.667463   13110 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
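The preload flow above is: stat the tarball on the guest, scp it over only when the stat fails, then extract it into /var with lz4. A rough Go sketch of that sequence follows; the helper name is hypothetical and the scp step is elided (in the log it is a ~360MB transfer over the ssh_runner).

package main

import (
	"fmt"
	"os/exec"
)

func preloadIfMissing(tarball string) error {
	// Existence check mirrors: stat -c "%s %y" /preloaded.tar.lz4
	if err := exec.Command("stat", "-c", "%s %y", tarball).Run(); err == nil {
		fmt.Println("preload already present, skipping copy")
		return nil
	}
	// (scp copy step elided; see the log above)
	// Extraction mirrors the tar invocation in the log.
	return exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
	if err := preloadIfMissing("/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload failed:", err)
	}
}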
	I1025 16:14:50.946226   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:50.946746   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:50.985744   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:50.985922   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:51.009147   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:51.009274   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:51.023660   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:51.023745   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:51.035992   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:51.036083   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:51.046749   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:51.046826   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:51.057651   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:51.057729   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:51.068627   12967 logs.go:282] 0 containers: []
	W1025 16:14:51.068642   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:51.068712   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:51.084002   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:51.084020   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:51.084025   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:51.096531   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:51.096548   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:51.111410   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:51.111420   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:14:51.123165   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:51.123174   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:51.135382   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:51.135395   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:51.147094   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:51.147108   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:51.158842   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:51.158853   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:51.171145   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:51.171156   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:51.183741   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:51.183753   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:51.207862   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:51.207880   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:51.250011   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:51.250031   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:51.254552   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:51.254563   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:51.293569   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:51.293579   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:51.313877   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:51.313888   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:51.339507   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:51.339521   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:51.351361   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:51.351374   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:51.365987   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:51.366001   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:53.853951   13110 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.186481041s)
	I1025 16:14:53.853965   13110 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 16:14:53.869795   13110 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 16:14:53.872676   13110 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1025 16:14:53.877869   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:53.959990   13110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 16:14:55.460674   13110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.50067875s)
	I1025 16:14:55.460790   13110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 16:14:55.471403   13110 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 16:14:55.471416   13110 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 16:14:55.471422   13110 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 16:14:55.475532   13110 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:55.477500   13110 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:55.479950   13110 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:55.480299   13110 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:55.482027   13110 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:55.482027   13110 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:55.483355   13110 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:55.483540   13110 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:55.484766   13110 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:55.485364   13110 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:55.485853   13110 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:55.485943   13110 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:55.486904   13110 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 16:14:55.487364   13110 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:55.487945   13110 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:55.488756   13110 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 16:14:56.003910   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:56.015271   13110 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1025 16:14:56.015307   13110 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:56.015369   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:56.025576   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1025 16:14:56.051092   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W1025 16:14:56.052880   13110 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 16:14:56.053309   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:56.063177   13110 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1025 16:14:56.063199   13110 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:56.063266   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:56.069819   13110 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1025 16:14:56.069840   13110 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:56.069894   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:56.081026   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1025 16:14:56.082365   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 16:14:56.082546   13110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 16:14:56.084423   13110 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1025 16:14:56.084450   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1025 16:14:56.128950   13110 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 16:14:56.128964   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1025 16:14:56.143677   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:56.172988   13110 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1025 16:14:56.173057   13110 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1025 16:14:56.173077   13110 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:56.173142   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:56.176970   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:56.182984   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 16:14:56.192533   13110 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1025 16:14:56.192556   13110 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:56.192617   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:56.202431   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1025 16:14:56.231642   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:56.242630   13110 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1025 16:14:56.242650   13110 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:56.242715   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:56.253017   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1025 16:14:56.323098   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 16:14:56.333682   13110 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1025 16:14:56.333702   13110 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1025 16:14:56.333767   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1025 16:14:56.344075   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 16:14:56.344220   13110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 16:14:56.345820   13110 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1025 16:14:56.345838   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1025 16:14:56.353412   13110 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 16:14:56.353422   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1025 16:14:56.379746   13110 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1025 16:14:56.399369   13110 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 16:14:56.399533   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:56.410099   13110 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 16:14:56.410122   13110 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:56.410189   13110 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:56.424039   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 16:14:56.424182   13110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 16:14:56.425638   13110 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 16:14:56.425651   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 16:14:56.455603   13110 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 16:14:56.455618   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 16:14:56.693941   13110 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 16:14:56.693985   13110 cache_images.go:92] duration metric: took 1.222564542s to LoadCachedImages
	W1025 16:14:56.694029   13110 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
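The LoadCachedImages loop above follows a consistent pattern per image: inspect the image ID in the runtime, and when it does not match the expected content hash, remove the stale tag and stream the cached tarball into `docker load`. The sketch below illustrates that pattern under stated assumptions: the hash is the pause:3.7 value from the log, the cache path mirrors the log's layout, and the substring match is a simplification (docker's `{{.Id}}` output carries a "sha256:" prefix).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func ensureImage(image, wantID, cachePath string) error {
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if strings.Contains(string(out), wantID) {
		return nil // already present with the right content
	}
	exec.Command("docker", "rmi", image).Run() // ignore "no such image", like the log
	f, err := os.Open(cachePath)
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load") // mirrors: sudo cat <cache> | docker load
	load.Stdin = f
	return load.Run()
}

func main() {
	err := ensureImage(
		"registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		os.Getenv("HOME")+"/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
	)
	if err != nil {
		fmt.Println("image load failed:", err)
	}
}

Note the failure mode recorded above: the kube-controller-manager cache file is missing on disk, so the whole LoadCachedImages call fails even though several other images transferred successfully.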
	I1025 16:14:56.694037   13110 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1025 16:14:56.694092   13110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-782000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 16:14:56.694164   13110 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 16:14:56.710243   13110 cni.go:84] Creating CNI manager for ""
	I1025 16:14:56.710263   13110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:14:56.710274   13110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 16:14:56.710285   13110 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-782000 NodeName:stopped-upgrade-782000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 16:14:56.710370   13110 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-782000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 16:14:56.710451   13110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1025 16:14:56.713294   13110 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 16:14:56.713333   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 16:14:56.716207   13110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 16:14:56.721426   13110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 16:14:56.726242   13110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1025 16:14:56.731383   13110 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1025 16:14:56.732627   13110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
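The /etc/hosts update above uses a filter-and-append idiom: strip any existing line for the host name, append the desired entry, write to a temp file, then copy it into place. A Go sketch of the same idempotent upsert follows; the function name and atomic-rename detail are illustrative (the log uses `sudo cp` of `/tmp/h.$$`).

package main

import (
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries, like grep -v
			keep = append(keep, line)
		}
	}
	keep = append(keep, ip+"\t"+name) // append the fresh mapping
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // replace the original in one step
}

func main() {
	_ = upsertHost("/etc/hosts", "10.0.2.15", "control-plane.minikube.internal")
}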
	I1025 16:14:56.736347   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:56.816349   13110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:14:56.822002   13110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000 for IP: 10.0.2.15
	I1025 16:14:56.822012   13110 certs.go:194] generating shared ca certs ...
	I1025 16:14:56.822021   13110 certs.go:226] acquiring lock for ca certs: {Name:mk87b032e78a00eded37575daed7123f238f6628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:56.822195   13110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.key
	I1025 16:14:56.822900   13110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.key
	I1025 16:14:56.822912   13110 certs.go:256] generating profile certs ...
	I1025 16:14:56.823110   13110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.key
	I1025 16:14:56.823126   13110 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be
	I1025 16:14:56.823138   13110 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1025 16:14:56.866141   13110 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be ...
	I1025 16:14:56.866158   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be: {Name:mk5d3a3941a8b7fcac917f24ade71303566e028d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:56.866720   13110 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be ...
	I1025 16:14:56.866731   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be: {Name:mk95d51aefdc6fb2c116ec879843759c674e4078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:56.866907   13110 certs.go:381] copying /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be -> /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt
	I1025 16:14:56.867030   13110 certs.go:385] copying /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be -> /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key
	I1025 16:14:56.867250   13110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/proxy-client.key
	I1025 16:14:56.867400   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998.pem (1338 bytes)
	W1025 16:14:56.867558   13110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998_empty.pem, impossibly tiny 0 bytes
	I1025 16:14:56.867565   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 16:14:56.867585   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem (1078 bytes)
	I1025 16:14:56.867606   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem (1123 bytes)
	I1025 16:14:56.867636   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem (1675 bytes)
	I1025 16:14:56.867678   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem (1708 bytes)
	I1025 16:14:56.868084   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 16:14:56.876351   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 16:14:56.883596   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 16:14:56.891388   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 16:14:56.898370   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 16:14:56.904902   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 16:14:56.911936   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 16:14:56.919643   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 16:14:56.926758   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 16:14:56.933633   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998.pem --> /usr/share/ca-certificates/10998.pem (1338 bytes)
	I1025 16:14:56.940531   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem --> /usr/share/ca-certificates/109982.pem (1708 bytes)
	I1025 16:14:56.947723   13110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 16:14:56.953172   13110 ssh_runner.go:195] Run: openssl version
	I1025 16:14:56.955089   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 16:14:56.958042   13110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:14:56.959553   13110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 23:10 /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:14:56.959581   13110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:14:56.961479   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 16:14:56.964604   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10998.pem && ln -fs /usr/share/ca-certificates/10998.pem /etc/ssl/certs/10998.pem"
	I1025 16:14:56.968140   13110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10998.pem
	I1025 16:14:56.969745   13110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 22:58 /usr/share/ca-certificates/10998.pem
	I1025 16:14:56.969772   13110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10998.pem
	I1025 16:14:56.971530   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10998.pem /etc/ssl/certs/51391683.0"
	I1025 16:14:56.974919   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109982.pem && ln -fs /usr/share/ca-certificates/109982.pem /etc/ssl/certs/109982.pem"
	I1025 16:14:56.978030   13110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109982.pem
	I1025 16:14:56.979423   13110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 22:58 /usr/share/ca-certificates/109982.pem
	I1025 16:14:56.979452   13110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109982.pem
	I1025 16:14:56.981265   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109982.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 16:14:56.984359   13110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 16:14:56.985771   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 16:14:56.988355   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 16:14:56.990382   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 16:14:56.992556   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 16:14:56.994329   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 16:14:56.996096   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
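The run of openssl calls above checks each control-plane certificate with `-checkend 86400`, which exits non-zero if the cert will expire within the next 86400 seconds (24 hours); a failing check is what triggers regeneration. A small sketch of that exit-code convention, using cert paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout",
		"-in", certPath, "-checkend", "86400").Run()
	return err != nil // non-zero exit => will expire soon (or file unreadable)
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Printf("%s expiring within 24h: %v\n", c, expiresWithinADay(c))
	}
}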
	I1025 16:14:56.998034   13110 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:14:56.998110   13110 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 16:14:57.008270   13110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 16:14:57.011868   13110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 16:14:57.011878   13110 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 16:14:57.011912   13110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 16:14:57.015325   13110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:14:57.015784   13110 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-782000" does not appear in /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:14:57.015906   13110 kubeconfig.go:62] /Users/jenkins/minikube-integration/19758-10490/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-782000" cluster setting kubeconfig missing "stopped-upgrade-782000" context setting]
	I1025 16:14:57.016116   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:57.016561   13110 kapi.go:59] client config for stopped-upgrade-782000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106a82510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 16:14:57.017066   13110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 16:14:57.019857   13110 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-782000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
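The drift check above relies on diff's exit-code convention: `diff -u old new` exits 0 when the files match and 1 when they differ, and the unified diff body (here: the criSocket URI scheme and the cgroupDriver change) is what gets logged before the cluster is reconfigured from the new file. A minimal sketch of that decision:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // Output still returns captured stdout on exit status 1
	if err == nil {
		fmt.Println("no drift; keeping existing kubeadm.yaml")
		return
	}
	// Exit status 1 means "files differ": log the diff and reconfigure.
	fmt.Printf("detected kubeadm config drift:\n%s", out)
}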
	I1025 16:14:57.019861   13110 kubeadm.go:1160] stopping kube-system containers ...
	I1025 16:14:57.019908   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 16:14:57.030686   13110 docker.go:483] Stopping containers: [7f47591e8309 8ef6282e225f 0f8b5253d658 50a050a9e75c 85a87d3c29bf fcc2487cc3e0 56ee9cb5f7b9 cc624b1f4264]
	I1025 16:14:57.030754   13110 ssh_runner.go:195] Run: docker stop 7f47591e8309 8ef6282e225f 0f8b5253d658 50a050a9e75c 85a87d3c29bf fcc2487cc3e0 56ee9cb5f7b9 cc624b1f4264
	I1025 16:14:57.046152   13110 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 16:14:57.051638   13110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:14:57.054838   13110 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 16:14:57.054844   13110 kubeadm.go:157] found existing configuration files:
	
	I1025 16:14:57.054876   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf
	I1025 16:14:57.057535   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 16:14:57.057566   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:14:57.060348   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf
	I1025 16:14:57.063384   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 16:14:57.063408   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:14:57.066360   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf
	I1025 16:14:57.068868   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 16:14:57.068891   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:14:57.071802   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf
	I1025 16:14:57.074846   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 16:14:57.074876   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
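The sweep above applies the same rule to each kubeconfig-style file: grep for the expected control-plane endpoint, and if the grep fails (file missing or pointing elsewhere), remove the file so `kubeadm init` can regenerate it. A sketch of that rule, with the endpoint and file list taken from the log:

package main

import (
	"os"
	"os/exec"
)

func cleanIfStale(path, endpoint string) {
	// Mirrors: sudo grep https://control-plane.minikube.internal:62397 <file>
	if err := exec.Command("grep", endpoint, path).Run(); err != nil {
		os.Remove(path) // missing or stale: drop it; errors ignored, like rm -f
	}
}

func main() {
	endpoint := "https://control-plane.minikube.internal:62397"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		cleanIfStale(f, endpoint)
	}
}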
	I1025 16:14:57.077298   13110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:14:57.080269   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.103056   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.402795   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.535117   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.565675   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.589237   13110 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:14:57.589323   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:14:58.091393   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:14:53.885422   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:14:58.591410   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:14:58.602892   13110 api_server.go:72] duration metric: took 1.013660666s to wait for apiserver process to appear ...
	I1025 16:14:58.602911   13110 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:14:58.602929   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
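Both pids (12967 and 13110) are now in the same wait loop: poll https://10.0.2.15:8443/healthz until it answers 200 or an overall deadline expires. A sketch of such a loop in Go, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and a 5-second per-request timeout to match the "Client.Timeout exceeded" retries seen below; minikube's real loop lives in api_server.go:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it
    // returns 200 OK or the overall deadline passes.
    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s retry cadence in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	end := time.Now().Add(deadline)
    	for time.Now().Before(end) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }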
	I1025 16:14:58.887580   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:14:58.887681   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:14:58.898695   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:14:58.898779   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:14:58.909622   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:14:58.909705   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:14:58.920475   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:14:58.920561   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:14:58.931548   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:14:58.931628   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:14:58.942486   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:14:58.942590   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:14:58.953684   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:14:58.953788   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:14:58.964393   12967 logs.go:282] 0 containers: []
	W1025 16:14:58.964403   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:14:58.964472   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:14:58.975232   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:14:58.975249   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:14:58.975254   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:14:59.020274   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:14:59.020286   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:14:59.032392   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:14:59.032402   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:14:59.043979   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:14:59.043995   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:14:59.067941   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:14:59.067950   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:14:59.092741   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:14:59.092780   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:14:59.106179   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:14:59.106189   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:14:59.127161   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:14:59.127175   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:14:59.163993   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:14:59.164007   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:14:59.178608   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:14:59.178623   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:14:59.200120   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:14:59.200131   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:14:59.213055   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:14:59.213067   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:14:59.225252   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:14:59.225264   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:14:59.237371   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:14:59.237381   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:14:59.242110   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:14:59.242119   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:14:59.267435   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:14:59.267449   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:14:59.278758   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:14:59.278770   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
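Every diagnostic pass in this report repeats one pattern per component: list the matching containers with docker ps -a --filter=name=k8s_<component>, then tail the last 400 lines of each. A condensed Go sketch of that gathering loop, shelling out to docker directly rather than through ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherLogs lists containers whose names match the k8s_<component>
    // prefix and tails the last 400 lines of each, mirroring the
    // logs.go passes in this report.
    func gatherLogs(component string) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		fmt.Println("docker ps failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    	for _, id := range ids {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		gatherLogs(c)
    	}
    }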
	I1025 16:15:01.796287   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:03.605044   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:03.605115   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:06.798756   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:06.798925   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:15:06.813993   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:15:06.814080   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:15:06.826393   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:15:06.826480   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:15:06.839452   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:15:06.839532   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:15:06.850139   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:15:06.850219   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:15:06.861310   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:15:06.861385   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:15:06.872197   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:15:06.872282   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:15:06.882212   12967 logs.go:282] 0 containers: []
	W1025 16:15:06.882226   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:15:06.882287   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:15:06.892834   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:15:06.892852   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:15:06.892858   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:15:06.897819   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:15:06.897828   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:15:06.911263   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:15:06.911275   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:15:06.922914   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:15:06.922929   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:15:06.958603   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:15:06.958614   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:15:06.970429   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:15:06.970439   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:15:06.989202   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:15:06.989212   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:15:07.000449   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:15:07.000460   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:15:07.017837   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:15:07.017850   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:15:07.043522   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:15:07.043533   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:15:07.058331   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:15:07.058341   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:15:07.069717   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:15:07.069728   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:15:07.111583   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:15:07.111595   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:15:07.123498   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:15:07.123510   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:15:07.135262   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:15:07.135277   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:15:07.147135   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:15:07.147146   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:15:07.171348   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:15:07.171355   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:15:08.605554   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:08.605614   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:09.685090   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:13.606144   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:13.606209   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:14.687290   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:14.687476   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:15:14.698951   12967 logs.go:282] 2 containers: [051c2bdfeab6 c0de5082f75b]
	I1025 16:15:14.699036   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:15:14.710069   12967 logs.go:282] 2 containers: [256aa04c6fe0 cf10a1d31713]
	I1025 16:15:14.710149   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:15:14.720844   12967 logs.go:282] 1 containers: [f1c00fb6a691]
	I1025 16:15:14.720924   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:15:14.731631   12967 logs.go:282] 2 containers: [3324ee378a14 8b920158411f]
	I1025 16:15:14.731706   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:15:14.743108   12967 logs.go:282] 1 containers: [6126e2846c92]
	I1025 16:15:14.743186   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:15:14.754513   12967 logs.go:282] 2 containers: [23bd841a0a0f f7d087d3ed95]
	I1025 16:15:14.754599   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:15:14.766824   12967 logs.go:282] 0 containers: []
	W1025 16:15:14.766839   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:15:14.766908   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:15:14.777911   12967 logs.go:282] 2 containers: [4793573f620f d638f46e9df9]
	I1025 16:15:14.777931   12967 logs.go:123] Gathering logs for kube-scheduler [3324ee378a14] ...
	I1025 16:15:14.777936   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3324ee378a14"
	I1025 16:15:14.791262   12967 logs.go:123] Gathering logs for kube-scheduler [8b920158411f] ...
	I1025 16:15:14.791273   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b920158411f"
	I1025 16:15:14.805093   12967 logs.go:123] Gathering logs for kube-controller-manager [23bd841a0a0f] ...
	I1025 16:15:14.805104   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 23bd841a0a0f"
	I1025 16:15:14.822332   12967 logs.go:123] Gathering logs for storage-provisioner [4793573f620f] ...
	I1025 16:15:14.822342   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4793573f620f"
	I1025 16:15:14.833858   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:15:14.833870   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:15:14.858297   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:15:14.858308   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:15:14.870767   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:15:14.870778   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:15:14.911487   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:15:14.911497   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:15:14.946781   12967 logs.go:123] Gathering logs for etcd [cf10a1d31713] ...
	I1025 16:15:14.946794   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf10a1d31713"
	I1025 16:15:14.961200   12967 logs.go:123] Gathering logs for kube-controller-manager [f7d087d3ed95] ...
	I1025 16:15:14.961211   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7d087d3ed95"
	I1025 16:15:14.972625   12967 logs.go:123] Gathering logs for storage-provisioner [d638f46e9df9] ...
	I1025 16:15:14.972636   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d638f46e9df9"
	I1025 16:15:14.983839   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:15:14.983849   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:15:14.988229   12967 logs.go:123] Gathering logs for kube-apiserver [051c2bdfeab6] ...
	I1025 16:15:14.988237   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 051c2bdfeab6"
	I1025 16:15:15.002010   12967 logs.go:123] Gathering logs for coredns [f1c00fb6a691] ...
	I1025 16:15:15.002018   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1c00fb6a691"
	I1025 16:15:15.013446   12967 logs.go:123] Gathering logs for kube-proxy [6126e2846c92] ...
	I1025 16:15:15.013457   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6126e2846c92"
	I1025 16:15:15.029936   12967 logs.go:123] Gathering logs for kube-apiserver [c0de5082f75b] ...
	I1025 16:15:15.029946   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0de5082f75b"
	I1025 16:15:15.055265   12967 logs.go:123] Gathering logs for etcd [256aa04c6fe0] ...
	I1025 16:15:15.055276   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 256aa04c6fe0"
	I1025 16:15:17.571969   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:18.607040   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:18.607091   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:22.572737   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:22.572830   12967 kubeadm.go:597] duration metric: took 4m4.539642833s to restartPrimaryControlPlane
	W1025 16:15:22.572936   12967 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 16:15:22.572974   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 16:15:23.563938   12967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 16:15:23.569062   12967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:15:23.572073   12967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:15:23.574740   12967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 16:15:23.574749   12967 kubeadm.go:157] found existing configuration files:
	
	I1025 16:15:23.574784   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/admin.conf
	I1025 16:15:23.577314   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 16:15:23.577342   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:15:23.580286   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/kubelet.conf
	I1025 16:15:23.582928   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 16:15:23.582951   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:15:23.585828   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/controller-manager.conf
	I1025 16:15:23.588921   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 16:15:23.588953   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:15:23.591944   12967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/scheduler.conf
	I1025 16:15:23.594475   12967 kubeadm.go:163] "https://control-plane.minikube.internal:62164" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62164 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 16:15:23.594497   12967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 16:15:23.597632   12967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 16:15:23.616423   12967 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 16:15:23.616447   12967 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 16:15:23.671046   12967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 16:15:23.671105   12967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 16:15:23.671162   12967 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 16:15:23.723298   12967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 16:15:23.728352   12967 out.go:235]   - Generating certificates and keys ...
	I1025 16:15:23.728385   12967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 16:15:23.728422   12967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 16:15:23.728464   12967 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 16:15:23.728496   12967 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 16:15:23.728554   12967 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 16:15:23.728582   12967 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 16:15:23.728612   12967 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 16:15:23.728643   12967 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 16:15:23.728686   12967 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 16:15:23.728726   12967 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 16:15:23.728749   12967 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 16:15:23.728801   12967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 16:15:23.805818   12967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 16:15:23.978702   12967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 16:15:24.028020   12967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 16:15:24.154365   12967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 16:15:24.188741   12967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 16:15:24.189237   12967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 16:15:24.189264   12967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 16:15:24.277619   12967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 16:15:23.608022   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:23.608048   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:24.281830   12967 out.go:235]   - Booting up control plane ...
	I1025 16:15:24.281872   12967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 16:15:24.281924   12967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 16:15:24.282004   12967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 16:15:24.282049   12967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 16:15:24.282201   12967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 16:15:28.784100   12967 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501702 seconds
	I1025 16:15:28.784191   12967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 16:15:28.789056   12967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 16:15:29.301490   12967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 16:15:29.301696   12967 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-023000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 16:15:29.804642   12967 kubeadm.go:310] [bootstrap-token] Using token: cmr7v0.1vzgagcd1x6m03eo
	I1025 16:15:29.810594   12967 out.go:235]   - Configuring RBAC rules ...
	I1025 16:15:29.810662   12967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 16:15:29.810707   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 16:15:29.814489   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 16:15:29.815878   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 16:15:29.816929   12967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 16:15:29.817879   12967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 16:15:29.820997   12967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 16:15:30.015136   12967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 16:15:30.210064   12967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 16:15:30.210565   12967 kubeadm.go:310] 
	I1025 16:15:30.210602   12967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 16:15:30.210608   12967 kubeadm.go:310] 
	I1025 16:15:30.210646   12967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 16:15:30.210650   12967 kubeadm.go:310] 
	I1025 16:15:30.210700   12967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 16:15:30.210738   12967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 16:15:30.210771   12967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 16:15:30.210805   12967 kubeadm.go:310] 
	I1025 16:15:30.210877   12967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 16:15:30.210884   12967 kubeadm.go:310] 
	I1025 16:15:30.210912   12967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 16:15:30.210917   12967 kubeadm.go:310] 
	I1025 16:15:30.210967   12967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 16:15:30.211033   12967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 16:15:30.211108   12967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 16:15:30.211113   12967 kubeadm.go:310] 
	I1025 16:15:30.211164   12967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 16:15:30.211207   12967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 16:15:30.211211   12967 kubeadm.go:310] 
	I1025 16:15:30.211251   12967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cmr7v0.1vzgagcd1x6m03eo \
	I1025 16:15:30.211306   12967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe \
	I1025 16:15:30.211319   12967 kubeadm.go:310] 	--control-plane 
	I1025 16:15:30.211322   12967 kubeadm.go:310] 
	I1025 16:15:30.211363   12967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 16:15:30.211367   12967 kubeadm.go:310] 
	I1025 16:15:30.211414   12967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cmr7v0.1vzgagcd1x6m03eo \
	I1025 16:15:30.211472   12967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe 
	I1025 16:15:30.211532   12967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 16:15:30.211539   12967 cni.go:84] Creating CNI manager for ""
	I1025 16:15:30.211549   12967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:15:30.218956   12967 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 16:15:30.229776   12967 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 16:15:30.232758   12967 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
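The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are a bridge CNI configuration; the log does not show the payload itself. The sketch below writes a generic bridge conflist of the same general shape. All field values here (subnet, plugin options) are illustrative assumptions, not minikube's literal file:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	// A generic bridge CNI conflist: a bridge plugin with host-local
    	// IPAM plus a portmap plugin. Values are illustrative only.
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	data, _ := json.MarshalIndent(conf, "", "  ")
    	// Writing under /etc/cni requires root on the node.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
    		fmt.Println(err)
    	}
    }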
	I1025 16:15:30.237851   12967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 16:15:30.237905   12967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 16:15:30.237923   12967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-023000 minikube.k8s.io/updated_at=2024_10_25T16_15_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=running-upgrade-023000 minikube.k8s.io/primary=true
	I1025 16:15:30.279791   12967 ops.go:34] apiserver oom_adj: -16
	I1025 16:15:30.280347   12967 kubeadm.go:1113] duration metric: took 42.486875ms to wait for elevateKubeSystemPrivileges
	I1025 16:15:30.280356   12967 kubeadm.go:394] duration metric: took 4m12.262003292s to StartCluster
	I1025 16:15:30.280364   12967 settings.go:142] acquiring lock: {Name:mkc7ffce42494ff0056038ca2482eba326c60c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:15:30.280557   12967 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:15:30.280946   12967 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:15:30.281174   12967 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:15:30.281216   12967 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 16:15:30.281250   12967 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-023000"
	I1025 16:15:30.281263   12967 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-023000"
	W1025 16:15:30.281266   12967 addons.go:243] addon storage-provisioner should already be in state true
	I1025 16:15:30.281280   12967 host.go:66] Checking if "running-upgrade-023000" exists ...
	I1025 16:15:30.281279   12967 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-023000"
	I1025 16:15:30.281288   12967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-023000"
	I1025 16:15:30.281371   12967 config.go:182] Loaded profile config "running-upgrade-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:15:30.282496   12967 kapi.go:59] client config for running-upgrade-023000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/running-upgrade-023000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104cbe510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 16:15:30.282894   12967 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-023000"
	W1025 16:15:30.282899   12967 addons.go:243] addon default-storageclass should already be in state true
	I1025 16:15:30.282907   12967 host.go:66] Checking if "running-upgrade-023000" exists ...
	I1025 16:15:30.284957   12967 out.go:177] * Verifying Kubernetes components...
	I1025 16:15:30.285290   12967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 16:15:30.289155   12967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 16:15:30.289163   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:15:30.292955   12967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:15:28.609000   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:28.609021   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:30.296911   12967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:15:30.300991   12967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:15:30.300998   12967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 16:15:30.301005   12967 sshutil.go:53] new ssh client: &{IP:localhost Port:62132 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/running-upgrade-023000/id_rsa Username:docker}
	I1025 16:15:30.389107   12967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:15:30.394373   12967 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:15:30.394425   12967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:15:30.398795   12967 api_server.go:72] duration metric: took 117.606666ms to wait for apiserver process to appear ...
	I1025 16:15:30.398803   12967 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:15:30.398811   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:30.427333   12967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 16:15:30.454418   12967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:15:30.767569   12967 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 16:15:30.767582   12967 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 16:15:33.610327   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:33.610370   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:35.400939   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:35.401013   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:38.610730   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:38.610750   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:40.401432   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:40.401463   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:43.612442   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:43.612466   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:45.401929   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:45.401999   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:48.612709   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:48.612752   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:50.402643   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:50.402695   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:53.615029   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:53.615072   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:55.403213   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:55.403254   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:00.403805   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:00.403845   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 16:16:00.769793   12967 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 16:16:00.775025   12967 out.go:177] * Enabled addons: storage-provisioner
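The default-storageclass failure above is a plain connectivity error: the List call on storage.k8s.io/v1 StorageClasses never reached 10.0.2.15:8443. A client-go sketch of the equivalent request; the kubeconfig path is an assumption taken from the commands earlier in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	cfg.Timeout = 30 * time.Second
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		// With the apiserver unreachable this surfaces as
    		// "dial tcp 10.0.2.15:8443: i/o timeout", as in the log.
    		fmt.Println("listing StorageClasses:", err)
    		return
    	}
    	fmt.Printf("%d StorageClasses\n", len(scs.Items))
    }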
	I1025 16:15:58.617362   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:58.617555   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:15:58.638539   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:15:58.638645   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:15:58.653142   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:15:58.653234   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:15:58.666280   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:15:58.666362   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:15:58.676894   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:15:58.676975   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:15:58.687817   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:15:58.687906   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:15:58.698101   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:15:58.698183   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:15:58.708846   13110 logs.go:282] 0 containers: []
	W1025 16:15:58.708856   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:15:58.708920   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:15:58.719448   13110 logs.go:282] 0 containers: []
	W1025 16:15:58.719460   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:15:58.719470   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:15:58.719475   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:15:58.756977   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:15:58.756996   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:15:58.869128   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:15:58.869139   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:15:58.896208   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:15:58.896218   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:15:58.908361   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:15:58.908372   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:15:58.920365   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:15:58.920377   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:15:58.935148   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:15:58.935162   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:15:58.952550   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:15:58.952560   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:15:58.957227   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:15:58.957234   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:15:58.971521   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:15:58.971532   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:15:58.988759   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:15:58.988769   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:15:59.014813   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:15:59.014822   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:15:59.029215   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:15:59.029226   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:15:59.044472   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:15:59.044483   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:15:59.055649   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:15:59.055659   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:01.569188   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:00.782859   12967 addons.go:510] duration metric: took 30.501853667s for enable addons: enabled=[storage-provisioner]
	I1025 16:16:06.571414   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:06.571588   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:06.582695   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:06.582789   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:06.593267   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:06.593350   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:06.603884   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:06.603961   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:06.615625   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:06.615711   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:06.631307   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:06.631393   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:06.641985   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:06.642065   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:06.652426   13110 logs.go:282] 0 containers: []
	W1025 16:16:06.652439   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:06.652505   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:06.662354   13110 logs.go:282] 0 containers: []
	W1025 16:16:06.662367   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:06.662373   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:06.662378   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:06.676345   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:06.676355   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:06.690870   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:06.690881   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:06.702913   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:06.702923   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:06.740411   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:06.740422   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:06.769963   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:06.769973   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:06.781552   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:06.781563   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:06.792923   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:06.792936   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:06.810006   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:06.810020   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:06.824605   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:06.824616   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:06.838339   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:06.838349   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:06.877194   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:06.877203   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:06.881571   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:06.881579   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:06.895938   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:06.895947   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:06.909777   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:06.909787   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:05.404940   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:05.404994   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:09.436824   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:10.406487   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:10.406540   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:14.439234   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:14.439531   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:14.465473   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:14.465631   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:14.482353   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:14.482453   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:14.495897   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:14.495980   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:14.507541   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:14.507624   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:14.517669   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:14.517742   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:14.528848   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:14.528926   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:14.539922   13110 logs.go:282] 0 containers: []
	W1025 16:16:14.539934   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:14.540003   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:14.550099   13110 logs.go:282] 0 containers: []
	W1025 16:16:14.550112   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:14.550121   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:14.550127   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:14.554202   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:14.554210   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:14.588645   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:14.588658   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:14.604082   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:14.604093   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:14.641266   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:14.641275   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:14.659504   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:14.659518   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:14.673494   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:14.673503   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:14.699777   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:14.699784   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:14.711741   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:14.711751   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:14.737569   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:14.737578   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:14.749519   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:14.749530   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:14.767356   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:14.767367   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:14.778478   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:14.778493   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:14.791760   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:14.791771   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:14.803595   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:14.803606   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
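	(The block above is one complete iteration of the log-gathering cycle: each control-plane component is located with a k8s_<name> container-name filter, then the last 400 lines of every match are dumped. A rough manual reproduction, a sketch only, run from a shell inside the guest and assuming the Docker runtime used here:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      for id in $(docker ps -a --filter=name=k8s_${name} --format={{.ID}}); do
	        echo "==> ${name} [${id}]"      # label each dump with component and container ID
	        docker logs --tail 400 "${id}"  # same tail depth as the runs logged above
	      done
	    done
	)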
	I1025 16:16:17.323109   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:15.408535   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:15.408564   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:22.325417   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:22.325603   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:22.340769   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:22.340869   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:22.352980   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:22.353059   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:22.363719   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:22.363801   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:22.374508   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:22.374592   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:22.385003   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:22.385083   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:22.395675   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:22.395750   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:22.406154   13110 logs.go:282] 0 containers: []
	W1025 16:16:22.406168   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:22.406231   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:22.416510   13110 logs.go:282] 0 containers: []
	W1025 16:16:22.416523   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:22.416531   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:22.416536   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:22.435896   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:22.435908   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:22.454631   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:22.454643   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:22.466416   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:22.466427   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:22.470702   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:22.470710   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:22.505728   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:22.505741   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:22.517172   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:22.517187   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:22.540863   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:22.540870   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:22.565642   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:22.565653   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:22.589148   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:22.589158   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:22.603535   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:22.603546   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:22.617741   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:22.617750   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:22.629441   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:22.629450   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:22.645938   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:22.645948   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:22.683486   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:22.683494   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:20.410760   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:20.410802   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
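	(Both PIDs, 12967 and 13110, are polling the same healthz endpoint roughly every five seconds and hitting the client timeout each time. To probe the endpoint by hand from inside the guest, something like the following would do; this is a sketch, where -k skips certificate verification and --max-time approximates the client timeout seen in these lines:

	    curl -k --max-time 5 https://10.0.2.15:8443/healthz
	)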
	I1025 16:16:25.196922   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:25.413117   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:25.413178   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:30.199245   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:30.199380   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:30.210654   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:30.210739   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:30.221092   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:30.221173   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:30.231425   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:30.231507   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:30.242016   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:30.242090   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:30.252653   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:30.252729   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:30.267120   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:30.267194   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:30.277559   13110 logs.go:282] 0 containers: []
	W1025 16:16:30.277572   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:30.277645   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:30.290009   13110 logs.go:282] 0 containers: []
	W1025 16:16:30.290023   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:30.290030   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:30.290037   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:30.328022   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:30.328035   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:30.332123   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:30.332129   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:30.345823   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:30.345834   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:30.360516   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:30.360525   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:30.372202   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:30.372213   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:30.390033   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:30.390043   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:30.403769   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:30.403783   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:30.429181   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:30.429194   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:30.442323   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:30.442338   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:30.456002   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:30.456016   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:30.483659   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:30.483675   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:30.502722   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:30.502734   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:30.539683   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:30.539694   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:30.555140   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:30.555149   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:33.069822   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:30.413488   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:30.413596   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:30.425286   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:30.425371   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:30.436688   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:30.436774   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:30.458882   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:30.458981   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:30.475712   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:30.475801   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:30.488906   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:30.488995   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:30.500166   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:30.500253   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:30.511962   12967 logs.go:282] 0 containers: []
	W1025 16:16:30.511973   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:30.512047   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:30.523026   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:30.523041   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:30.523047   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:30.548576   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:30.548595   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:30.553858   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:30.553869   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:30.569299   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:30.569309   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:30.581593   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:30.581608   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:30.599109   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:30.599124   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:30.613798   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:30.613808   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:30.625993   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:30.626008   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:30.637566   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:30.637578   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:30.649421   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:30.649431   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:30.686232   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:30.686239   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:30.721057   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:30.721067   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:30.734957   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:30.734966   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:33.248487   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:38.072069   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:38.072248   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:38.083510   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:38.083594   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:38.094331   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:38.094414   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:38.105513   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:38.105593   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:38.115681   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:38.115770   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:38.126200   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:38.126281   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:38.136846   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:38.136925   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:38.156076   13110 logs.go:282] 0 containers: []
	W1025 16:16:38.156091   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:38.156165   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:38.166170   13110 logs.go:282] 0 containers: []
	W1025 16:16:38.166181   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:38.166190   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:38.166197   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:38.171024   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:38.171030   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:38.185762   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:38.185771   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:38.197553   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:38.197568   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:38.209564   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:38.209575   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:38.223924   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:38.223936   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:38.255147   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:38.255161   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:38.268130   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:38.268142   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:38.249103   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:38.249198   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:38.261957   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:38.262042   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:38.274208   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:38.274291   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:38.286421   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:38.286505   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:38.298238   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:38.298405   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:38.314004   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:38.314084   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:38.325115   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:38.325191   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:38.286353   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:38.286364   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:38.327823   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:38.327833   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:38.346632   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:38.346644   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:38.373256   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:38.373268   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:38.412374   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:38.412392   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:38.427110   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:38.427123   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:38.439637   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:38.439653   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:40.955284   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:38.340268   12967 logs.go:282] 0 containers: []
	W1025 16:16:38.340279   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:38.340351   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:38.351883   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:38.351900   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:38.351905   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:38.367123   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:38.367133   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:38.387704   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:38.387720   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:38.407321   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:38.407335   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:38.421034   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:38.421046   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:38.434207   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:38.434219   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:38.447392   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:38.447403   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:38.472967   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:38.472981   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:38.510378   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:38.510391   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:38.515380   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:38.515388   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:38.549763   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:38.549773   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:38.563893   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:38.563903   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:38.575618   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:38.575633   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:41.089478   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:45.957824   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:45.958336   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:45.996486   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:45.996640   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:46.015377   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:46.015479   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:46.033153   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:46.033244   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:46.056844   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:46.056955   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:46.068365   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:46.068438   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:46.078833   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:46.078903   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:46.089084   13110 logs.go:282] 0 containers: []
	W1025 16:16:46.089099   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:46.089157   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:46.100825   13110 logs.go:282] 0 containers: []
	W1025 16:16:46.100836   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:46.100845   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:46.100851   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:46.127524   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:46.127536   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:46.146789   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:46.146803   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:46.173313   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:46.173328   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:46.188076   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:46.188085   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:46.200025   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:46.200039   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:46.212698   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:46.212713   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:46.225382   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:46.225395   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:46.264604   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:46.264615   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:46.303937   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:46.303950   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:46.323681   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:46.323693   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:46.336212   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:46.336225   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:46.341004   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:46.341016   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:46.355772   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:46.355786   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:46.371221   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:46.371234   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:46.089929   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:46.089992   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:46.102306   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:46.102384   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:46.114057   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:46.114140   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:46.128563   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:46.128641   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:46.140304   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:46.140389   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:46.152488   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:46.152578   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:46.163749   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:46.163825   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:46.174671   12967 logs.go:282] 0 containers: []
	W1025 16:16:46.174681   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:46.174750   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:46.185818   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:46.185836   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:46.185844   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:46.200904   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:46.200913   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:46.220022   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:46.220044   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:46.233379   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:46.233390   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:46.260913   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:46.260923   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:46.297693   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:46.297705   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:46.313608   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:46.313625   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:46.329939   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:46.329950   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:46.346502   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:46.346515   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:46.359229   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:46.359242   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:46.397254   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:46.397266   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:46.401994   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:46.402001   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:46.419858   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:46.419868   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:48.890062   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:48.933410   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:53.891837   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:53.892298   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:53.926306   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:53.926455   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:53.944936   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:53.945038   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:53.960171   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:53.960262   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:53.973364   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:53.973442   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:53.987767   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:53.987841   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:53.999577   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:53.999657   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:54.011030   13110 logs.go:282] 0 containers: []
	W1025 16:16:54.011040   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:54.011106   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:54.022826   13110 logs.go:282] 0 containers: []
	W1025 16:16:54.022837   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:54.022846   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:54.022851   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:54.039151   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:54.039168   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:54.055182   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:54.055197   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:54.093708   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:54.093720   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:54.098646   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:54.098662   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:54.112073   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:54.112088   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:54.132449   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:54.132463   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:54.159315   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:54.159326   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:54.191791   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:54.191809   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:54.205132   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:54.205147   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:54.222905   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:54.222922   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:54.236365   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:54.236379   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:54.249533   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:54.249544   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:54.262456   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:54.262467   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:54.303476   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:54.303486   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:56.819907   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:53.935263   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:53.935468   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:53.952504   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:16:53.952588   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:53.970116   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:16:53.970193   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:53.981885   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:16:53.981961   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:53.993571   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:16:53.993645   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:54.005180   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:16:54.005262   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:54.017470   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:16:54.017549   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:54.029124   12967 logs.go:282] 0 containers: []
	W1025 16:16:54.029135   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:54.029205   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:54.040325   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:16:54.040338   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:54.040343   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:54.078314   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:16:54.078324   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:16:54.092949   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:16:54.092961   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:16:54.108689   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:16:54.108704   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:16:54.127686   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:16:54.127696   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:16:54.142516   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:16:54.142527   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:16:54.154519   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:54.154530   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:54.159631   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:54.159639   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:54.196875   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:16:54.196890   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:16:54.213013   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:16:54.213029   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:16:54.225438   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:16:54.225448   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:16:54.250533   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:54.250547   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:54.278049   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:16:54.278076   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:56.792681   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:01.820096   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:01.820237   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:01.838515   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:01.838610   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:01.851731   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:01.851817   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:01.862664   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:01.862753   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:01.874160   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:01.874239   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:01.885491   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:01.885570   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:01.896832   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:01.896912   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:01.907564   13110 logs.go:282] 0 containers: []
	W1025 16:17:01.907575   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:01.907645   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:01.922600   13110 logs.go:282] 0 containers: []
	W1025 16:17:01.922611   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:01.922619   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:01.922625   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:01.937970   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:01.937982   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:01.953179   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:01.953191   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:01.965713   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:01.965725   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:01.978131   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:01.978142   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:01.982604   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:01.982613   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:02.023769   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:02.023782   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:02.038728   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:02.038741   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:02.065273   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:02.065286   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:02.078285   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:02.078297   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:02.097236   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:02.097251   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:02.110361   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:02.110373   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:02.147717   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:02.147731   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:02.174935   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:02.174946   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:02.194133   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:02.194143   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:01.795043   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:01.795713   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:01.824338   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:01.824422   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:01.840908   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:01.840991   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:01.853791   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:01.853859   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:01.865510   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:01.865610   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:01.877251   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:01.877337   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:01.888492   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:01.888566   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:01.899419   12967 logs.go:282] 0 containers: []
	W1025 16:17:01.899429   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:01.899492   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:01.911612   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:01.911625   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:01.911630   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:01.916544   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:01.916553   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:01.932471   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:01.932482   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:01.948654   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:01.948667   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:01.961549   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:01.961562   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:01.980956   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:01.980965   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:02.018283   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:02.018297   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:02.057636   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:02.057647   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:02.073572   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:02.073584   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:02.086851   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:02.086863   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:02.102603   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:02.102615   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:02.115343   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:02.115356   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:02.142661   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:02.142682   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
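
Each "Checking apiserver healthz" line above is followed exactly five seconds later by a "stopped: ... context deadline exceeded" line, which reflects a per-probe client timeout against https://10.0.2.15:8443/healthz. A minimal standalone Go sketch of such a probe (hypothetical; minikube's real check lives in its api_server.go and uses its own transport and retry logic):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Assumption: a 5s timeout, matching the gap between the
            // "Checking" and "stopped" log lines above.
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver's cert is not trusted from the test host.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://10.0.2.15:8443/healthz")
        if err != nil {
            // e.g. "Client.Timeout exceeded while awaiting headers"
            fmt.Println("stopped:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("healthz status:", resp.Status)
    }
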
	I1025 16:17:04.721253   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:04.657244   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:09.722038   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:09.722156   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:09.733792   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:09.733880   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:09.745539   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:09.745627   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:09.757118   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:09.757198   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:09.768223   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:09.768306   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:09.779308   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:09.779388   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:09.792086   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:09.792163   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:09.803754   13110 logs.go:282] 0 containers: []
	W1025 16:17:09.803804   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:09.803880   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:09.814584   13110 logs.go:282] 0 containers: []
	W1025 16:17:09.814594   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:09.814603   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:09.814608   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:09.855648   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:09.855668   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:09.871438   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:09.871454   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:09.890141   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:09.890157   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:09.917406   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:09.917426   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:09.933057   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:09.933067   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:09.962022   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:09.962031   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:09.966607   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:09.966620   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:10.004357   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:10.004369   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:10.023372   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:10.023383   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:10.037916   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:10.037926   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:10.050099   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:10.050110   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:10.062289   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:10.062299   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:10.074314   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:10.074325   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:10.092437   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:10.092447   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
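
Each gather cycle begins by enumerating control-plane containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, exactly as the Run: lines show; a count of 0 (as for "kindnet" everywhere and "storage-provisioner" on this node) simply means no matching container exists. A hypothetical local Go equivalent of that lookup (the real one is executed over SSH by ssh_runner.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (running or not)
    // whose names match the k8s_<component> prefix convention.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // Same component order as the log above.
        for _, c := range []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        } {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }
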
	I1025 16:17:12.604812   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:09.659424   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:09.659638   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:09.673668   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:09.673764   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:09.684861   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:09.684940   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:09.694986   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:09.695069   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:09.712184   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:09.712262   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:09.722318   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:09.722363   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:09.737535   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:09.737622   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:09.749853   12967 logs.go:282] 0 containers: []
	W1025 16:17:09.749866   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:09.749937   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:09.761559   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:09.761573   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:09.761579   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:09.766689   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:09.766700   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:09.779335   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:09.779345   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:09.795244   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:09.795257   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:09.808108   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:09.808125   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:09.846017   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:09.846028   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:09.860806   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:09.860816   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:09.880345   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:09.880356   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:09.898789   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:09.898800   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:09.919227   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:09.919235   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:09.931757   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:09.931768   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:09.957821   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:09.957832   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:09.972392   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:09.972404   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:12.515342   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:17.605266   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:17.605366   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:17.617031   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:17.617114   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:17.629246   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:17.629341   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:17.640205   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:17.640280   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:17.652191   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:17.652274   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:17.665042   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:17.665125   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:17.676583   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:17.676661   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:17.687744   13110 logs.go:282] 0 containers: []
	W1025 16:17:17.687759   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:17.687830   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:17.701396   13110 logs.go:282] 0 containers: []
	W1025 16:17:17.701409   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:17.701419   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:17.701425   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:17.743670   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:17.743690   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:17.748262   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:17.748269   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:17.760002   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:17.760016   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:17.785745   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:17.785768   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:17.823858   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:17.823871   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:17.836347   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:17.836359   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:17.859705   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:17.859715   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:17.872810   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:17.872820   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:17.887186   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:17.887197   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:17.912960   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:17.912971   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:17.926828   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:17.926838   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:17.941254   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:17.941265   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:17.953599   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:17.953611   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:17.968608   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:17.968619   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:17.517585   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:17.517750   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:17.532454   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:17.532550   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:17.544593   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:17.544672   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:17.554938   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:17.555022   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:17.565455   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:17.565529   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:17.575590   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:17.575666   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:17.586382   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:17.586530   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:17.596818   12967 logs.go:282] 0 containers: []
	W1025 16:17:17.596833   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:17.596899   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:17.607700   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:17.607711   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:17.607716   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:17.643975   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:17.643986   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:17.649007   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:17.649017   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:17.667965   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:17.667974   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:17.684235   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:17.684250   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:17.697540   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:17.697552   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:17.717756   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:17.717771   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:17.729976   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:17.729989   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:17.773516   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:17.773527   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:17.790034   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:17.790045   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:17.804028   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:17.804038   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:17.816291   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:17.816305   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:17.829505   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:17.829517   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:20.482441   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:20.356896   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:25.484575   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:25.484724   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:25.496834   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:25.496932   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:25.507962   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:25.508032   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:25.519674   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:25.519752   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:25.531331   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:25.531407   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:25.543047   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:25.543123   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:25.554661   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:25.554735   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:25.565827   13110 logs.go:282] 0 containers: []
	W1025 16:17:25.565839   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:25.565907   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:25.579571   13110 logs.go:282] 0 containers: []
	W1025 16:17:25.579581   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:25.579588   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:25.579593   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:25.593786   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:25.593800   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:25.606616   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:25.606628   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:25.631576   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:25.631585   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:25.668298   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:25.668316   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:25.694850   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:25.694861   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:25.706306   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:25.706318   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:25.720992   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:25.721003   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:25.738849   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:25.738860   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:25.775347   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:25.775358   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:25.796402   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:25.796412   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:25.808440   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:25.808451   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:25.820618   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:25.820628   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:25.832503   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:25.832528   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:25.837016   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:25.837022   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:25.359119   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:25.359306   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:25.377437   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:25.377547   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:25.391669   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:25.391747   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:25.405505   12967 logs.go:282] 2 containers: [24408302c429 e4a8eaea1752]
	I1025 16:17:25.405583   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:25.416142   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:25.416226   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:25.426432   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:25.426511   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:25.436802   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:25.436881   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:25.446802   12967 logs.go:282] 0 containers: []
	W1025 16:17:25.446817   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:25.446889   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:25.457106   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:25.457124   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:25.457131   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:25.468761   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:25.468771   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:25.480686   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:25.480702   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:25.495798   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:25.495810   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:25.512585   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:25.512601   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:25.525264   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:25.525274   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:25.544105   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:25.544117   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:25.557184   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:25.557193   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:25.569738   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:25.569750   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:25.595161   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:25.595170   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:25.631648   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:25.631656   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:25.636624   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:25.636635   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:25.680085   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:25.680097   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
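
Taken together, the cycles repeat one pattern: probe /healthz; on timeout, re-enumerate the containers, tail each one's logs (docker logs --tail 400), and collect the kubelet/Docker journals and dmesg; then probe again roughly 2.5 seconds later. A condensed, hypothetical sketch of that driver loop, with both the per-probe timeout and the overall wait window as assumptions (the stubs stand in for the two sketches above; the real window is minikube's and is not recoverable from this excerpt):

    package main

    import (
        "fmt"
        "time"
    )

    // Stubs standing in for the earlier sketches.
    func probeHealthz() error { return fmt.Errorf("context deadline exceeded") }
    func gatherLogs()         { fmt.Println("docker ps / docker logs / journalctl / dmesg ...") }

    func main() {
        // Assumption: some bounded overall wait window.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if err := probeHealthz(); err != nil {
                fmt.Println("stopped:", err)
                gatherLogs()
                // ~2.5s between gather end and the next probe in these stamps.
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("gave up waiting for apiserver")
    }
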
	I1025 16:17:28.196483   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:28.352762   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:33.198622   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:33.198757   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:33.211340   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:33.211432   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:33.222613   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:33.222703   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:33.233180   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:33.233271   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:33.243072   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:33.243156   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:33.253803   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:33.253886   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:33.264427   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:33.264505   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:33.274693   12967 logs.go:282] 0 containers: []
	W1025 16:17:33.274704   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:33.274769   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:33.285957   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:33.285977   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:33.285985   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:33.297559   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:33.297569   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:33.309768   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:33.309780   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:33.321174   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:33.321187   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:33.355067   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:33.355164   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:33.367827   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:33.367909   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:33.379368   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:33.379456   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:33.391188   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:33.391275   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:33.402743   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:33.402831   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:33.413444   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:33.413532   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:33.424908   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:33.424997   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:33.436007   13110 logs.go:282] 0 containers: []
	W1025 16:17:33.436020   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:33.436092   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:33.447095   13110 logs.go:282] 0 containers: []
	W1025 16:17:33.447106   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:33.447115   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:33.447120   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:33.474755   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:33.474768   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:33.488574   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:33.488587   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:33.514933   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:33.514943   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:33.555929   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:33.555947   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:33.593701   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:33.593712   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:33.608736   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:33.608746   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:33.622721   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:33.622732   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:33.636934   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:33.636945   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:33.648100   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:33.648112   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:33.652784   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:33.652793   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:33.664332   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:33.664343   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:33.681089   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:33.681098   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:33.693913   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:33.693925   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:33.706028   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:33.706038   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:36.222423   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:33.336865   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:33.336877   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:33.361785   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:33.361802   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:33.367012   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:33.367025   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:33.447727   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:33.447735   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:33.459722   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:33.459735   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:33.479410   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:33.479420   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:33.495481   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:33.495493   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:33.532124   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:33.532133   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:33.548621   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:33.548634   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:33.563235   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:33.563248   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:33.576536   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:33.576548   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:36.091047   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:41.223787   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:41.223880   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:41.235111   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:41.235194   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:41.245791   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:41.245874   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:41.257863   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:41.258069   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:41.269571   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:41.269666   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:41.283289   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:41.283360   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:41.295497   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:41.295573   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:41.306138   13110 logs.go:282] 0 containers: []
	W1025 16:17:41.306151   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:41.306259   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:41.317638   13110 logs.go:282] 0 containers: []
	W1025 16:17:41.317650   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:41.317657   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:41.317662   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:41.344098   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:41.344107   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:41.366064   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:41.366076   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:41.395601   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:41.395613   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:41.410500   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:41.410512   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:41.414957   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:41.414969   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:41.430024   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:41.430036   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:41.446391   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:41.446401   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:41.461596   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:41.461607   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:41.474591   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:41.474602   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:41.487247   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:41.487256   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:41.506711   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:41.506727   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:41.545819   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:41.545828   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:41.581160   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:41.581170   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:41.592851   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:41.592861   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:41.093115   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:41.093278   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:41.113018   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:41.113100   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:41.125511   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:41.125600   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:41.137585   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:41.137670   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:41.148319   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:41.148395   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:41.164022   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:41.164117   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:41.174461   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:41.174540   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:41.185130   12967 logs.go:282] 0 containers: []
	W1025 16:17:41.185141   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:41.185212   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:41.195744   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:41.195765   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:41.195779   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:41.218973   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:41.218980   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:41.255367   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:41.255388   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:41.260570   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:41.260580   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:41.272459   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:41.272471   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:41.289951   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:41.289967   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:41.302688   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:41.302698   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:41.344018   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:41.344028   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:41.362949   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:41.362960   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:41.376324   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:41.376337   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:41.400484   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:41.400498   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:41.415188   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:41.415198   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:41.431442   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:41.431451   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:41.448630   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:41.448645   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:41.461075   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:41.461091   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:44.120328   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:43.975887   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:49.121894   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:49.121993   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:49.133638   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:49.133724   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:49.145669   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:49.145749   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:49.158647   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:49.158732   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:49.178963   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:49.179051   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:49.189773   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:49.189855   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:49.203052   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:49.203155   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:49.213713   13110 logs.go:282] 0 containers: []
	W1025 16:17:49.213726   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:49.213802   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:49.224714   13110 logs.go:282] 0 containers: []
	W1025 16:17:49.224724   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:49.224731   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:49.224736   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:49.242967   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:49.242976   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:49.280237   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:49.280250   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:49.299372   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:49.299386   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:49.325812   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:49.325823   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:49.338658   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:49.338670   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:49.353829   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:49.353846   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:49.373390   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:49.373400   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:49.385780   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:49.385794   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:49.410300   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:49.410317   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:49.425678   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:49.425691   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:49.438208   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:49.438217   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:49.450425   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:49.450439   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:49.488233   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:49.488244   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:49.492979   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:49.492985   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:52.010396   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:48.978197   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:48.978681   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:49.006202   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:49.006355   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:49.024484   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:49.024584   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:49.038807   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:49.038898   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:49.050031   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:49.050111   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:49.063869   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:49.063953   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:49.074312   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:49.074388   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:49.089010   12967 logs.go:282] 0 containers: []
	W1025 16:17:49.089023   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:49.089096   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:49.099577   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:49.099595   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:49.099601   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:49.111248   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:49.111258   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:49.129483   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:49.129499   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:49.148397   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:49.148407   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:49.166894   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:49.166905   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:49.193741   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:49.193759   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:49.207076   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:49.207088   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:49.245291   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:49.245301   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:49.257682   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:49.257694   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:49.263030   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:49.263042   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:49.301814   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:49.301824   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:49.318032   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:49.318042   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:49.330863   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:49.330875   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:49.343453   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:49.343465   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:49.359246   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:49.359258   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:51.877003   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
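Each "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded while awaiting headers" pair above is one probe of the apiserver's health endpoint; the ~5 s spacing between the two lines suggests a per-request timeout of that order. A minimal Go sketch of that probe, assuming a 5-second client timeout and skipped TLS verification against the VM's self-signed certificate — the helper name checkHealthz and the retry count are illustrative, not minikube's actual code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz is a hypothetical helper mirroring the probe in the log:
    // one GET per attempt, bounded by a short per-request timeout.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed from the ~5 s gap between "Checking" and "stopped"
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // surfaces as: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        url := "https://10.0.2.15:8443/healthz"
        for i := 0; i < 3; i++ { // bounded retries for the sketch; minikube retries until its wait budget expires
            fmt.Println("Checking apiserver healthz at", url, "...")
            if err := checkHealthz(url); err != nil {
                fmt.Println("stopped:", err)
                time.Sleep(3 * time.Second) // in the real flow, logs are gathered here before the next attempt
                continue
            }
            fmt.Println("healthz ok")
            return
        }
    }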
	I1025 16:17:57.012701   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:57.012813   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:57.031014   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:57.031093   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:57.042709   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:57.042793   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:57.054029   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:57.054128   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:57.066147   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:57.066233   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:57.079230   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:57.079315   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:57.091165   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:57.091254   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:57.102924   13110 logs.go:282] 0 containers: []
	W1025 16:17:57.102935   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:57.103007   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:57.115473   13110 logs.go:282] 0 containers: []
	W1025 16:17:57.115488   13110 logs.go:284] No container was found matching "storage-provisioner"
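Before each gathering pass, the runner enumerates control-plane containers with docker ps name filters; an empty result produces the W-level "No container was found matching" lines (kindnet for both PIDs and, for PID 13110, storage-provisioner as well). A self-contained sketch of that discovery step, under the assumption that shelling out to docker locally stands in for running it over SSH — listContainers is a made-up name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the discovery step in the log: docker ps -a with
    // a k8s_<component> name filter, returning one ID per matching container.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c) // the W-level case above
            }
        }
    }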
	I1025 16:17:57.115499   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:57.115505   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:57.131129   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:57.131138   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:57.149498   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:57.149510   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:57.167879   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:57.167894   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:57.186394   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:57.186405   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:57.211593   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:57.211611   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:57.216698   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:57.216716   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:57.243028   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:57.243041   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:57.257977   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:57.257992   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:57.273682   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:57.273694   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:57.288331   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:57.288342   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:57.327184   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:57.327199   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:57.342055   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:57.342069   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:57.354382   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:57.354394   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:57.369146   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:57.369156   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
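The "Gathering logs for X ..." lines cycle through a fixed set of sources, each backed by one shell command run over SSH: journalctl for kubelet and for Docker/cri-docker, dmesg filtered to warnings and above, kubectl describe nodes, a container-status listing that prefers crictl and falls back to docker ps (that is what the `which crictl || echo crictl` idiom followed by `|| sudo docker ps -a` accomplishes), plus docker logs --tail 400 for every container found during discovery. A sketch that mirrors that table — the map and function names are invented, but every command string is copied verbatim from the log:

    package main

    import "fmt"

    // logSources is a hypothetical mirror of the source-to-command table
    // implied by the "Gathering logs for ..." lines above.
    var logSources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    // containerLogCommand covers the per-container case: the last 400 lines
    // of each container found in the discovery step.
    func containerLogCommand(id string) string {
        return "docker logs --tail 400 " + id
    }

    func main() {
        // Map iteration order is random; the real runner visits sources in its
        // own order, as the varying sequence between cycles above shows.
        for src, cmd := range logSources {
            fmt.Printf("Gathering logs for %s ...\n  /bin/bash -c %q\n", src, cmd)
        }
        fmt.Println(containerLogCommand("e8629f14c08e"))
    }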
	I1025 16:17:56.879611   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:56.879928   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:56.905797   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:17:56.905936   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:56.922865   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:17:56.922961   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:56.936039   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:17:56.936130   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:56.947199   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:17:56.947279   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:56.957355   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:17:56.957442   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:56.967543   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:17:56.967625   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:56.982903   12967 logs.go:282] 0 containers: []
	W1025 16:17:56.982917   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:56.982987   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:56.993966   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:17:56.993983   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:56.993989   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:57.031403   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:17:57.031413   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:17:57.045868   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:17:57.045881   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:17:57.058638   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:17:57.058651   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:17:57.085020   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:17:57.085037   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:17:57.106114   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:17:57.106124   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:57.129288   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:57.129301   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:57.135025   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:57.135036   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:57.174471   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:17:57.174482   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:17:57.193099   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:17:57.193111   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:17:57.212372   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:17:57.212381   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:17:57.232425   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:17:57.232436   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:17:57.245903   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:17:57.245914   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:17:57.261051   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:17:57.261064   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:17:57.273885   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:57.273893   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:59.911772   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:59.803132   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:04.914238   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:04.914341   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:04.925880   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:04.925968   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:04.937253   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:04.937335   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:04.950781   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:04.950864   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:04.962903   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:04.962990   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:04.973876   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:04.973955   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:04.985160   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:04.985243   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:05.001158   13110 logs.go:282] 0 containers: []
	W1025 16:18:05.001171   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:05.001242   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:05.012190   13110 logs.go:282] 0 containers: []
	W1025 16:18:05.012202   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:05.012209   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:05.012214   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:05.027685   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:05.027696   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:05.046227   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:05.046241   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:05.058040   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:05.058050   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:05.075852   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:05.075863   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:05.094920   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:05.094932   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:05.120533   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:05.120557   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:05.161704   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:05.161723   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:05.177518   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:05.177534   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:05.182059   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:05.182069   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:05.220717   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:05.220731   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:05.233376   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:05.233390   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:05.257510   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:05.257522   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:05.269392   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:05.269403   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:05.283083   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:05.283095   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:07.797003   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:04.806003   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:04.806536   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:04.843840   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:04.844000   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:04.865195   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:04.865298   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:04.880462   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:04.880559   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:04.892774   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:04.892855   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:04.904532   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:04.904613   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:04.915156   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:04.915202   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:04.926989   12967 logs.go:282] 0 containers: []
	W1025 16:18:04.926996   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:04.927037   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:04.943812   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:04.943832   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:04.943838   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:04.957191   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:04.957202   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:04.977421   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:04.977432   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:05.004398   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:05.004409   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:05.043049   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:05.043063   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:05.055557   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:05.055568   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:05.069439   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:05.069451   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:05.084841   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:05.084857   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:05.097151   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:05.097158   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:05.115162   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:05.115173   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:05.130772   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:05.130784   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:05.170946   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:05.170959   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:05.188133   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:05.188147   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:05.201174   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:05.201185   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:05.214291   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:05.214303   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:07.721321   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:12.798848   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:12.798931   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:12.810237   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:12.810315   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:12.821706   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:12.821790   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:12.833395   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:12.833478   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:12.843984   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:12.844063   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:12.854773   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:12.854852   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:12.866572   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:12.866650   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:12.878755   13110 logs.go:282] 0 containers: []
	W1025 16:18:12.878766   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:12.878840   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:12.890444   13110 logs.go:282] 0 containers: []
	W1025 16:18:12.890455   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:12.890463   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:12.890469   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:12.931197   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:12.931210   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:12.969021   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:12.969033   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:12.981455   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:12.981468   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:12.994334   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:12.994349   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:13.018246   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:13.018258   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:13.022503   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:13.022511   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:13.034144   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:13.034160   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:13.049997   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:13.050015   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:13.062824   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:13.062839   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:13.078333   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:13.078351   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:13.108981   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:13.108991   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:13.123065   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:13.123077   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:13.137970   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:13.137980   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:13.149453   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:13.149467   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:12.723518   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:12.723724   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:12.739543   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:12.739629   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:12.752461   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:12.752541   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:12.766796   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:12.766873   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:12.781590   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:12.781673   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:12.791821   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:12.791905   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:12.802688   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:12.802765   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:12.814025   12967 logs.go:282] 0 containers: []
	W1025 16:18:12.814039   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:12.814112   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:12.825819   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:12.825836   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:12.825842   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:12.863591   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:12.863607   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:12.876742   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:12.876755   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:12.894201   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:12.894210   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:12.907460   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:12.907471   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:12.933599   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:12.933611   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:12.971998   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:12.972009   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:12.985250   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:12.985261   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:12.998242   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:12.998255   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:13.021566   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:13.021576   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:13.035011   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:13.035020   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:13.040127   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:13.040139   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:13.054999   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:13.055011   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:13.068135   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:13.068147   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:13.083701   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:13.083712   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:15.672376   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:15.597491   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:20.674456   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:20.674516   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:20.686034   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:20.686073   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:20.697393   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:20.697479   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:20.708761   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:20.708844   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:20.720508   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:20.720596   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:20.731734   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:20.731815   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:20.743206   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:20.743289   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:20.755226   13110 logs.go:282] 0 containers: []
	W1025 16:18:20.755239   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:20.755313   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:20.770291   13110 logs.go:282] 0 containers: []
	W1025 16:18:20.770302   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:20.770310   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:20.770314   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:20.785574   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:20.785589   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:20.801503   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:20.801519   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:20.806198   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:20.806210   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:20.844049   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:20.844060   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:20.860370   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:20.860384   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:20.873068   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:20.873077   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:20.886722   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:20.886736   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:20.899409   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:20.899422   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:20.939181   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:20.939194   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:20.954445   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:20.954459   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:20.970529   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:20.970540   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:20.987467   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:20.987477   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:21.010824   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:21.010833   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:21.036460   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:21.036471   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:20.599929   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:20.600095   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:20.613695   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:20.613783   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:20.625205   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:20.625281   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:20.636368   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:20.636455   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:20.646735   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:20.646812   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:20.663787   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:20.663862   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:20.674352   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:20.674431   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:20.685791   12967 logs.go:282] 0 containers: []
	W1025 16:18:20.685802   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:20.685876   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:20.697696   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:20.697710   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:20.697715   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:20.736862   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:20.736904   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:20.750115   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:20.750126   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:20.767864   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:20.767875   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:20.808134   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:20.808142   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:20.821136   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:20.821147   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:20.834738   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:20.834751   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:20.839682   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:20.839693   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:20.855070   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:20.855087   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:20.871896   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:20.871909   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:20.884123   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:20.884135   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:20.903427   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:20.903436   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:20.916607   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:20.916619   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:20.942333   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:20.942346   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:20.958028   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:20.958038   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:23.552186   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:23.472679   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:28.554367   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:28.554467   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:28.566256   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:28.566342   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:28.580276   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:28.580358   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:28.592306   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:28.592389   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:28.603418   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:28.603501   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:28.615558   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:28.615641   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:28.627555   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:28.627631   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:28.638270   13110 logs.go:282] 0 containers: []
	W1025 16:18:28.638281   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:28.638353   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:28.649277   13110 logs.go:282] 0 containers: []
	W1025 16:18:28.649297   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:28.649363   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:28.649376   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:28.671731   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:28.671741   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:28.692548   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:28.692559   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:28.708268   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:28.708279   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:28.723440   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:28.723455   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:28.734903   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:28.734916   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:28.747750   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:28.747762   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:28.771742   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:28.771763   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:28.811644   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:28.811661   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:28.839308   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:28.839322   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:28.854960   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:28.854975   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:28.873768   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:28.873782   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:28.891443   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:28.891457   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:28.895596   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:28.895602   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:28.933931   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:28.933948   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:31.450182   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:28.474894   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:28.475034   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:28.487136   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:28.487226   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:28.497960   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:28.498031   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:28.510692   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:28.510764   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:28.521635   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:28.521711   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:28.532610   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:28.532690   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:28.542928   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:28.543006   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:28.556627   12967 logs.go:282] 0 containers: []
	W1025 16:18:28.556637   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:28.556693   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:28.567933   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:28.567952   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:28.567958   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:28.583875   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:28.583886   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:28.596475   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:28.596488   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:28.609457   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:28.609470   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:28.646771   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:28.646785   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:28.659439   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:28.659450   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:28.664312   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:28.664320   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:28.677645   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:28.677654   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:28.693825   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:28.693834   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:28.713718   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:28.713735   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:28.739624   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:28.739634   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:28.751968   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:28.751984   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:28.791313   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:28.791326   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:28.813986   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:28.814000   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:28.829975   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:28.829987   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:31.344490   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:36.452400   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:36.452482   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:36.464029   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:36.464109   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:36.475237   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:36.475315   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:36.493845   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:36.493925   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:36.505240   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:36.505324   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:36.518343   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:36.518421   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:36.530025   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:36.530107   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:36.541259   13110 logs.go:282] 0 containers: []
	W1025 16:18:36.541272   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:36.541347   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:36.552015   13110 logs.go:282] 0 containers: []
	W1025 16:18:36.552027   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:36.552034   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:36.552039   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:36.567493   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:36.567503   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:36.582329   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:36.582339   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:36.599534   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:36.599547   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:36.624358   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:36.624375   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:36.651332   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:36.651346   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:36.668258   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:36.668275   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:36.682639   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:36.682651   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:36.695562   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:36.695574   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:36.709157   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:36.709168   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:36.748233   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:36.748245   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:36.752431   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:36.752439   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:36.763797   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:36.763807   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:36.779354   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:36.779363   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:36.812660   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:36.812670   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:36.346883   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:36.347179   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:36.369824   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:36.369963   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:36.386233   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:36.386331   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:36.402429   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:36.402518   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:36.413622   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:36.413703   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:36.423562   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:36.423640   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:36.434091   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:36.434167   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:36.444982   12967 logs.go:282] 0 containers: []
	W1025 16:18:36.444993   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:36.445058   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:36.455604   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:36.455620   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:36.455626   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:36.468195   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:36.468204   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:36.480807   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:36.480817   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:36.498750   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:36.498763   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:36.540680   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:36.540694   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:36.553536   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:36.553544   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:36.578995   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:36.579009   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:36.614966   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:36.614978   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:36.619912   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:36.619921   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:36.632065   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:36.632076   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:36.644951   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:36.644963   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:36.658773   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:36.658786   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:36.675340   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:36.675353   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:36.689849   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:36.689865   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:36.702247   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:36.702259   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:39.338162   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:39.220550   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:44.340312   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:44.340426   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:44.352115   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:44.352199   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:44.363781   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:44.363868   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:44.375046   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:44.375129   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:44.386426   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:44.386518   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:44.399523   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:44.399606   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:44.411181   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:44.411267   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:44.422558   13110 logs.go:282] 0 containers: []
	W1025 16:18:44.422570   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:44.422637   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:44.434251   13110 logs.go:282] 0 containers: []
	W1025 16:18:44.434264   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:44.434275   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:44.434283   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:44.477280   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:44.477290   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:44.492742   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:44.492751   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:44.508041   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:44.508056   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:44.520602   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:44.520613   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:44.546963   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:44.546981   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:44.563471   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:44.563483   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:44.583432   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:44.583449   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:44.611645   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:44.611656   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:44.635130   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:44.635141   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:44.647564   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:44.647576   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:44.663946   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:44.663956   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:44.668352   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:44.668359   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:44.701908   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:44.701923   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:44.716051   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:44.716061   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:47.229922   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:44.223166   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:44.223442   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:44.246807   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:44.246947   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:44.263136   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:44.263238   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:44.275866   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:44.275953   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:44.287731   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:44.287816   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:44.298121   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:44.298200   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:44.308735   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:44.308814   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:44.318715   12967 logs.go:282] 0 containers: []
	W1025 16:18:44.318727   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:44.318786   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:44.329408   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:44.329424   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:44.329430   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:44.341002   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:44.341010   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:44.346166   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:44.346179   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:44.361619   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:44.361631   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:44.374948   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:44.374961   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:44.388724   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:44.388733   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:44.404373   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:44.404387   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:44.423627   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:44.423637   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:44.461754   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:44.461770   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:44.476810   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:44.476820   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:44.490067   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:44.490079   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:44.516769   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:44.516784   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:44.531674   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:44.531687   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:44.570884   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:44.570897   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:44.584598   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:44.584609   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:47.100214   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:52.232170   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:52.232240   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:52.243894   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:52.243976   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:52.255358   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:52.255442   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:52.266083   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:52.266168   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:52.277759   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:52.277855   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:52.291282   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:52.291371   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:52.305188   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:52.305293   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:52.316924   13110 logs.go:282] 0 containers: []
	W1025 16:18:52.316937   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:52.317013   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:52.329037   13110 logs.go:282] 0 containers: []
	W1025 16:18:52.329048   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:52.329056   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:52.329061   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:52.368061   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:52.368076   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:52.383366   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:52.383383   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:52.395666   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:52.395679   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:52.411885   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:52.411896   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:52.424907   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:52.424915   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:52.429208   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:52.429224   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:52.443877   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:52.443894   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:52.456404   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:52.456417   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:52.493235   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:52.493248   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:52.519203   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:52.519215   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:52.532922   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:52.532931   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:52.544681   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:52.544693   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:52.556149   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:52.556159   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:52.573202   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:52.573212   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:52.102810   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:52.103285   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:52.143043   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:18:52.143186   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:52.171193   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:18:52.171298   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:52.184324   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:18:52.184400   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:52.195195   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:18:52.195282   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:52.206102   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:18:52.206185   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:52.217368   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:18:52.217453   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:52.228587   12967 logs.go:282] 0 containers: []
	W1025 16:18:52.228597   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:52.228665   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:52.240417   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:18:52.240435   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:18:52.240440   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:18:52.253236   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:18:52.253247   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:18:52.266653   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:18:52.266663   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:18:52.280083   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:18:52.280092   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:18:52.295407   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:18:52.295419   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:18:52.308882   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:18:52.308895   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:18:52.325487   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:52.325504   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:52.351784   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:18:52.351795   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:52.364201   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:52.364214   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:52.403326   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:18:52.403338   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:18:52.419037   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:52.419051   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:52.424348   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:18:52.424363   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:18:52.436970   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:18:52.436982   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:18:52.462968   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:18:52.462979   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:18:52.475030   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:52.475042   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:55.099548   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:55.014530   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:00.102113   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:00.102153   13110 kubeadm.go:597] duration metric: took 4m3.09195725s to restartPrimaryControlPlane
	W1025 16:19:00.102184   13110 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 16:19:00.102197   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 16:19:01.119750   13110 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.017548167s)
	I1025 16:19:01.119823   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 16:19:01.125053   13110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:19:01.128015   13110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:19:01.130809   13110 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 16:19:01.130815   13110 kubeadm.go:157] found existing configuration files:
	
	I1025 16:19:01.130849   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf
	I1025 16:19:01.133485   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 16:19:01.133513   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:19:01.136308   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf
	I1025 16:19:01.138828   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 16:19:01.138863   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:19:01.141879   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf
	I1025 16:19:01.145231   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 16:19:01.145259   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:19:01.148191   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf
	I1025 16:19:01.150740   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 16:19:01.150766   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 16:19:01.153836   13110 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 16:19:01.172314   13110 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 16:19:01.172380   13110 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 16:19:01.221600   13110 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 16:19:01.221658   13110 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 16:19:01.221710   13110 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 16:19:01.277969   13110 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 16:19:01.282202   13110 out.go:235]   - Generating certificates and keys ...
	I1025 16:19:01.282239   13110 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 16:19:01.282290   13110 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 16:19:01.282333   13110 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 16:19:01.282365   13110 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 16:19:01.282424   13110 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 16:19:01.282454   13110 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 16:19:01.282488   13110 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 16:19:01.282527   13110 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 16:19:01.282572   13110 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 16:19:01.282647   13110 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 16:19:01.282686   13110 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 16:19:01.282721   13110 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 16:19:01.426410   13110 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 16:19:01.545830   13110 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 16:19:01.638698   13110 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 16:19:01.758627   13110 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 16:19:01.787215   13110 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 16:19:01.787615   13110 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 16:19:01.787635   13110 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 16:19:01.870489   13110 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 16:19:01.874691   13110 out.go:235]   - Booting up control plane ...
	I1025 16:19:01.874742   13110 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 16:19:01.874774   13110 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 16:19:01.874818   13110 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 16:19:01.874862   13110 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 16:19:01.874970   13110 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 16:19:00.016247   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:00.016834   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:00.053915   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:00.054078   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:00.075514   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:00.075626   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:00.090162   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:19:00.090255   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:00.105396   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:00.105480   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:00.123186   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:00.123272   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:00.139998   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:00.140085   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:00.151063   12967 logs.go:282] 0 containers: []
	W1025 16:19:00.151080   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:00.151151   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:00.162372   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:00.162392   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:00.162398   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:00.167217   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:00.167226   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:00.180196   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:00.180208   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:00.192634   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:00.192647   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:00.205846   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:19:00.205858   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:19:00.217933   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:00.217946   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:00.233689   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:00.233701   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:00.251729   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:00.251743   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:00.278110   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:00.278132   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:00.316593   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:00.316610   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:00.332113   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:00.332127   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:00.347898   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:00.347913   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:00.363230   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:00.363244   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:00.376447   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:00.376459   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:00.414356   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:19:00.414375   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:19:02.929360   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:06.878654   13110 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.008494 seconds
	I1025 16:19:06.878715   13110 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 16:19:06.882031   13110 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 16:19:07.392646   13110 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 16:19:07.392810   13110 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-782000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 16:19:07.896838   13110 kubeadm.go:310] [bootstrap-token] Using token: kbudsc.ttqy1u5ja78iqr90
	I1025 16:19:07.903390   13110 out.go:235]   - Configuring RBAC rules ...
	I1025 16:19:07.903459   13110 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 16:19:07.903513   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 16:19:07.905339   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 16:19:07.910228   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 16:19:07.910873   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 16:19:07.911801   13110 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 16:19:07.914928   13110 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 16:19:08.061769   13110 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 16:19:08.301911   13110 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 16:19:08.302516   13110 kubeadm.go:310] 
	I1025 16:19:08.302618   13110 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 16:19:08.302631   13110 kubeadm.go:310] 
	I1025 16:19:08.302669   13110 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 16:19:08.302676   13110 kubeadm.go:310] 
	I1025 16:19:08.302693   13110 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 16:19:08.302724   13110 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 16:19:08.302767   13110 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 16:19:08.302777   13110 kubeadm.go:310] 
	I1025 16:19:08.302807   13110 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 16:19:08.302817   13110 kubeadm.go:310] 
	I1025 16:19:08.302842   13110 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 16:19:08.302844   13110 kubeadm.go:310] 
	I1025 16:19:08.302873   13110 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 16:19:08.302914   13110 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 16:19:08.302948   13110 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 16:19:08.302951   13110 kubeadm.go:310] 
	I1025 16:19:08.302996   13110 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 16:19:08.303043   13110 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 16:19:08.303053   13110 kubeadm.go:310] 
	I1025 16:19:08.303098   13110 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kbudsc.ttqy1u5ja78iqr90 \
	I1025 16:19:08.303319   13110 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe \
	I1025 16:19:08.303337   13110 kubeadm.go:310] 	--control-plane 
	I1025 16:19:08.303343   13110 kubeadm.go:310] 
	I1025 16:19:08.303391   13110 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 16:19:08.303395   13110 kubeadm.go:310] 
	I1025 16:19:08.303466   13110 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kbudsc.ttqy1u5ja78iqr90 \
	I1025 16:19:08.303519   13110 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe 
	I1025 16:19:08.303612   13110 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 16:19:08.303625   13110 cni.go:84] Creating CNI manager for ""
	I1025 16:19:08.303633   13110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:19:08.306303   13110 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 16:19:07.931541   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:07.931655   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:07.943939   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:07.944026   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:07.959163   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:07.959242   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:07.970426   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:19:07.970512   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:07.981156   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:07.981241   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:07.995485   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:07.995562   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:08.006748   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:08.006828   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:08.017510   12967 logs.go:282] 0 containers: []
	W1025 16:19:08.017524   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:08.017598   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:08.028448   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:08.028467   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:08.028473   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:08.033209   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:08.033216   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:08.045965   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:19:08.045977   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:19:08.059152   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:08.059168   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:08.075716   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:08.075730   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:08.096200   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:08.096214   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:08.135309   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:08.135332   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:08.158223   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:19:08.158237   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:19:08.172333   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:08.172346   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:08.184971   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:08.184981   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:08.197051   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:08.197064   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:08.222915   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:08.222941   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:08.237725   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:08.237736   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:08.273884   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:08.273899   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:08.291075   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:08.291087   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:08.314220   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 16:19:08.319083   13110 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1025 16:19:08.330017   13110 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 16:19:08.330092   13110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 16:19:08.330132   13110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-782000 minikube.k8s.io/updated_at=2024_10_25T16_19_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=stopped-upgrade-782000 minikube.k8s.io/primary=true
	I1025 16:19:08.370036   13110 kubeadm.go:1113] duration metric: took 40.009666ms to wait for elevateKubeSystemPrivileges
	I1025 16:19:08.370058   13110 ops.go:34] apiserver oom_adj: -16
	I1025 16:19:08.370166   13110 kubeadm.go:394] duration metric: took 4m11.373879375s to StartCluster
	I1025 16:19:08.370178   13110 settings.go:142] acquiring lock: {Name:mkc7ffce42494ff0056038ca2482eba326c60c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:19:08.370277   13110 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:19:08.370676   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:19:08.370864   13110 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:19:08.370903   13110 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 16:19:08.370976   13110 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:19:08.370984   13110 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-782000"
	I1025 16:19:08.370991   13110 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-782000"
	W1025 16:19:08.370994   13110 addons.go:243] addon storage-provisioner should already be in state true
	I1025 16:19:08.371005   13110 host.go:66] Checking if "stopped-upgrade-782000" exists ...
	I1025 16:19:08.370991   13110 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-782000"
	I1025 16:19:08.371028   13110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-782000"
	I1025 16:19:08.371466   13110 retry.go:31] will retry after 1.056223176s: connect: dial unix /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/monitor: connect: connection refused
	I1025 16:19:08.372246   13110 kapi.go:59] client config for stopped-upgrade-782000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106a82510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 16:19:08.372372   13110 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-782000"
	W1025 16:19:08.372376   13110 addons.go:243] addon default-storageclass should already be in state true
	I1025 16:19:08.372382   13110 host.go:66] Checking if "stopped-upgrade-782000" exists ...
	I1025 16:19:08.372911   13110 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 16:19:08.372916   13110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 16:19:08.372921   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:19:08.375285   13110 out.go:177] * Verifying Kubernetes components...
	I1025 16:19:08.385231   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:19:08.472668   13110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:19:08.478400   13110 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:19:08.478451   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:19:08.482877   13110 api_server.go:72] duration metric: took 112.00025ms to wait for apiserver process to appear ...
	I1025 16:19:08.482887   13110 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:19:08.482893   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:08.540204   13110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 16:19:08.870134   13110 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 16:19:08.870146   13110 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 16:19:09.432125   13110 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:19:09.436147   13110 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:19:09.436154   13110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 16:19:09.436161   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:19:09.475531   13110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
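The two steps above (scp the manifest into the guest, then kubectl apply it against the in-VM kubeconfig) are how minikube installs an addon. A rough equivalent driven through the ssh binary rather than minikube's internal ssh_runner; the port, key path, and remote paths are the ones logged, the ssh invocation itself is an approximation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        key := "/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa"
        // Step 2 of the sequence above; step 1 would be an scp of the yaml.
        cmd := exec.Command("ssh", "-i", key, "-p", "62363", "docker@localhost",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
                "/var/lib/minikube/binaries/v1.24.1/kubectl apply -f "+
                "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            panic(err)
        }
    }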
	I1025 16:19:10.806762   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:13.484905   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:13.484930   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
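Each healthz probe above times out after roughly 5s ("Client.Timeout exceeded") and is retried until an overall budget expires. A sketch of that poll loop; InsecureSkipVerify is a shortcut here, whereas the real check trusts the cluster CA, and the 6-minute budget is the one named in the GUEST_START failure later in the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // yields the logged "Client.Timeout exceeded" errors
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                continue
            }
            healthy := resp.StatusCode == http.StatusOK
            resp.Body.Close()
            if healthy {
                fmt.Println("apiserver healthy")
                return
            }
        }
        fmt.Println("apiserver healthz never reported healthy")
    }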
	I1025 16:19:15.807321   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:15.807458   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:15.819326   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:15.819414   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:15.830178   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:15.830272   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:15.841190   12967 logs.go:282] 4 containers: [7f00f3bb70a3 2eee8f96b914 24408302c429 e4a8eaea1752]
	I1025 16:19:15.841269   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:15.851745   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:15.851825   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:15.861934   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:15.862013   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:15.872836   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:15.872915   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:15.883163   12967 logs.go:282] 0 containers: []
	W1025 16:19:15.883174   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:15.883243   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:15.894081   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:15.894101   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:15.894108   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:15.905931   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:15.905944   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:15.910385   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:15.910395   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:15.921982   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:15.921992   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:15.933893   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:15.933903   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:15.958488   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:15.958497   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:15.977661   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:15.977671   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:15.989228   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:15.989239   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:16.024656   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:16.024667   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:16.039715   12967 logs.go:123] Gathering logs for coredns [24408302c429] ...
	I1025 16:19:16.039732   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24408302c429"
	I1025 16:19:16.051490   12967 logs.go:123] Gathering logs for coredns [e4a8eaea1752] ...
	I1025 16:19:16.051499   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4a8eaea1752"
	I1025 16:19:16.063346   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:16.063361   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:16.098004   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:16.098015   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:16.111984   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:16.111994   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:16.128324   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:16.128336   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
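The block above is one pass of minikube's log gatherer: discover each control-plane container by its k8s_ name prefix, then tail its logs. A local sketch of those same two docker commands (minikube runs them over SSH inside the guest):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, _ := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+name, "--format={{.ID}}").Output()
            for _, id := range strings.Fields(string(ids)) {
                fmt.Printf("==> %s [%s] <==\n", name, id)
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(out))
            }
        }
    }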
	I1025 16:19:18.485150   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:18.485203   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:18.641981   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:23.485508   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:23.485544   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:23.644132   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:23.644298   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:19:23.659481   12967 logs.go:282] 1 containers: [bcab9f9b8b31]
	I1025 16:19:23.659581   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:19:23.671385   12967 logs.go:282] 1 containers: [cd811d7278cd]
	I1025 16:19:23.671472   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:19:23.682476   12967 logs.go:282] 4 containers: [aa15ca65191d 887b6293ef77 7f00f3bb70a3 2eee8f96b914]
	I1025 16:19:23.682551   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:19:23.696641   12967 logs.go:282] 1 containers: [c3d4f5a54d4c]
	I1025 16:19:23.696730   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:19:23.707220   12967 logs.go:282] 1 containers: [676e38c84e0a]
	I1025 16:19:23.707293   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:19:23.717593   12967 logs.go:282] 1 containers: [3c45f9ff0428]
	I1025 16:19:23.717672   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:19:23.728393   12967 logs.go:282] 0 containers: []
	W1025 16:19:23.728406   12967 logs.go:284] No container was found matching "kindnet"
	I1025 16:19:23.728478   12967 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:19:23.739365   12967 logs.go:282] 1 containers: [79775a806b06]
	I1025 16:19:23.739383   12967 logs.go:123] Gathering logs for kube-apiserver [bcab9f9b8b31] ...
	I1025 16:19:23.739397   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcab9f9b8b31"
	I1025 16:19:23.754700   12967 logs.go:123] Gathering logs for kube-proxy [676e38c84e0a] ...
	I1025 16:19:23.754710   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 676e38c84e0a"
	I1025 16:19:23.767175   12967 logs.go:123] Gathering logs for kube-controller-manager [3c45f9ff0428] ...
	I1025 16:19:23.767188   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c45f9ff0428"
	I1025 16:19:23.784343   12967 logs.go:123] Gathering logs for container status ...
	I1025 16:19:23.784355   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:19:23.796618   12967 logs.go:123] Gathering logs for kubelet ...
	I1025 16:19:23.796633   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:19:23.830964   12967 logs.go:123] Gathering logs for coredns [887b6293ef77] ...
	I1025 16:19:23.830972   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 887b6293ef77"
	I1025 16:19:23.845446   12967 logs.go:123] Gathering logs for coredns [7f00f3bb70a3] ...
	I1025 16:19:23.845459   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f00f3bb70a3"
	I1025 16:19:23.857200   12967 logs.go:123] Gathering logs for coredns [2eee8f96b914] ...
	I1025 16:19:23.857211   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2eee8f96b914"
	I1025 16:19:23.871758   12967 logs.go:123] Gathering logs for kube-scheduler [c3d4f5a54d4c] ...
	I1025 16:19:23.871771   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3d4f5a54d4c"
	I1025 16:19:23.886563   12967 logs.go:123] Gathering logs for etcd [cd811d7278cd] ...
	I1025 16:19:23.886575   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd811d7278cd"
	I1025 16:19:23.900047   12967 logs.go:123] Gathering logs for coredns [aa15ca65191d] ...
	I1025 16:19:23.900056   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa15ca65191d"
	I1025 16:19:23.915685   12967 logs.go:123] Gathering logs for storage-provisioner [79775a806b06] ...
	I1025 16:19:23.915698   12967 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 79775a806b06"
	I1025 16:19:23.927563   12967 logs.go:123] Gathering logs for dmesg ...
	I1025 16:19:23.927574   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:19:23.931870   12967 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:19:23.931878   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:19:23.966949   12967 logs.go:123] Gathering logs for Docker ...
	I1025 16:19:23.966960   12967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:19:26.492930   12967 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:31.495185   12967 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:31.499640   12967 out.go:201] 
	W1025 16:19:31.502570   12967 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1025 16:19:31.502577   12967 out.go:270] * 
	W1025 16:19:31.503120   12967 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:19:31.514485   12967 out.go:201] 
	I1025 16:19:28.485932   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:28.485962   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:33.486439   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:33.486462   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:38.487078   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:38.487116   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 16:19:38.870903   13110 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 16:19:38.876252   13110 out.go:177] * Enabled addons: storage-provisioner
	I1025 16:19:38.887061   13110 addons.go:510] duration metric: took 30.516376583s for enable addons: enabled=[storage-provisioner]
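The default-storageclass failure above comes from the callback that lists StorageClasses and stamps "standard" as the default; the List call is what hit the i/o timeout. A client-go sketch of that operation under the assumption it runs inside the guest against /var/lib/minikube/kubeconfig (on the host you would point at your own kubeconfig); the annotation key is the standard Kubernetes one:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // This List is exactly what the logged error says timed out.
        scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            panic(err) // "Error listing StorageClasses: ... i/o timeout"
        }
        for i := range scs.Items {
            sc := &scs.Items[i]
            if sc.Name != "standard" {
                continue
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }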
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-10-25 23:10:40 UTC, ends at Fri 2024-10-25 23:19:47 UTC. --
	Oct 25 23:19:23 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:23Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 25 23:19:28 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:28Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 25 23:19:31 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:31Z" level=error msg="ContainerStats resp: {0x4000824a80 linux}"
	Oct 25 23:19:31 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:31Z" level=error msg="ContainerStats resp: {0x4000825bc0 linux}"
	Oct 25 23:19:32 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:32Z" level=error msg="ContainerStats resp: {0x40004f8780 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x40004f9b40 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x40004f9e80 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x4000357380 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x40004855c0 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x4000898b80 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x4000899240 linux}"
	Oct 25 23:19:33 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:33Z" level=error msg="ContainerStats resp: {0x4000485f80 linux}"
	Oct 25 23:19:38 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 25 23:19:43 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 25 23:19:43 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:43Z" level=error msg="ContainerStats resp: {0x40008ef540 linux}"
	Oct 25 23:19:43 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:43Z" level=error msg="ContainerStats resp: {0x40004f9980 linux}"
	Oct 25 23:19:44 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:44Z" level=error msg="ContainerStats resp: {0x40007de3c0 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x40007df1c0 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x40007df380 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x4000357080 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x4000357dc0 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x40008342c0 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x4000834ac0 linux}"
	Oct 25 23:19:45 running-upgrade-023000 cri-dockerd[3044]: time="2024-10-25T23:19:45Z" level=error msg="ContainerStats resp: {0x4000835200 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	aa15ca65191d2       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   0c7eeb29349e3
	887b6293ef77b       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   28811dbf08f03
	7f00f3bb70a33       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   28811dbf08f03
	2eee8f96b9141       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   0c7eeb29349e3
	676e38c84e0ad       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   ce6df542d60aa
	79775a806b065       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f772e2b4839b3
	c3d4f5a54d4cf       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   85ad6e2a80f90
	cd811d7278cd7       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   4d3757ef9c722
	3c45f9ff04284       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   bb56df1368af9
	bcab9f9b8b31b       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   445156fdec9fa
	
	
	==> coredns [2eee8f96b914] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:43839->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:57034->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:53339->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:48588->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:47817->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:54966->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:59939->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:60177->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:54769->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2670286894963480319.2594967166441467645. HINFO: read udp 10.244.0.3:34850->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
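All four coredns instances in this report fail the same HINFO self-test against the QEMU user-net resolver at 10.0.2.3:53. A sketch that reproduces the probe from inside the guest with a custom resolver and a short timeout; an i/o timeout here matches the logged errors:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                // Force every query to the upstream coredns is complaining about.
                d := net.Dialer{}
                return d.DialContext(ctx, "udp", "10.0.2.3:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        _, err := r.LookupHost(ctx, "kubernetes.io")
        fmt.Println(err) // expect an i/o timeout matching the coredns errors
    }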
	
	
	==> coredns [7f00f3bb70a3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:45877->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:33958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:48668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:59115->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:48675->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:57903->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:42910->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:59871->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:52701->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9080669653574082004.5798290245801120407. HINFO: read udp 10.244.0.2:50083->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [887b6293ef77] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:52164->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:39241->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:47352->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:34844->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:56234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:53884->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6219478690191894022.1247591945344101584. HINFO: read udp 10.244.0.2:43376->10.0.2.3:53: i/o timeout
	
	
	==> coredns [aa15ca65191d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:55214->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:43441->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:48795->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:37676->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:58465->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:51021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6699125333524845117.6351426886457474447. HINFO: read udp 10.244.0.3:49684->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-023000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-023000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc
	                    minikube.k8s.io/name=running-upgrade-023000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_25T16_15_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 25 Oct 2024 23:15:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-023000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 25 Oct 2024 23:19:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 25 Oct 2024 23:15:30 +0000   Fri, 25 Oct 2024 23:15:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 25 Oct 2024 23:15:30 +0000   Fri, 25 Oct 2024 23:15:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 25 Oct 2024 23:15:30 +0000   Fri, 25 Oct 2024 23:15:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 25 Oct 2024 23:15:30 +0000   Fri, 25 Oct 2024 23:15:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-023000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 c692a2b90b4741aabb63eee0c5fd0f92
	  System UUID:                c692a2b90b4741aabb63eee0c5fd0f92
	  Boot ID:                    cae1b47f-d304-455d-a836-41ce778f2942
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5dx4q                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-d4bkr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-023000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-023000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-023000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-hpdm2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-023000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-023000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-023000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-023000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-023000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-023000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-023000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-023000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-023000 event: Registered Node running-upgrade-023000 in Controller
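The Conditions table above is what minikube's readiness checks consume. A client-go sketch that fetches the same node and prints its conditions (kubeconfig path assumed as in the earlier storage-class sketch):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "running-upgrade-023000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Prints the Type/Status/Reason triples shown in the table above.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
    }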
	
	
	==> dmesg <==
	[  +1.632280] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.063887] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.089944] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.139257] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091505] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.081126] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.554626] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[Oct25 23:11] systemd-fstab-generator[1928]: Ignoring "noauto" for root device
	[  +2.516450] systemd-fstab-generator[2204]: Ignoring "noauto" for root device
	[  +0.182345] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.097970] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.093522] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +2.580931] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.208898] systemd-fstab-generator[2999]: Ignoring "noauto" for root device
	[  +0.076959] systemd-fstab-generator[3012]: Ignoring "noauto" for root device
	[  +0.079503] systemd-fstab-generator[3023]: Ignoring "noauto" for root device
	[  +0.094292] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +2.368788] systemd-fstab-generator[3190]: Ignoring "noauto" for root device
	[  +2.668103] systemd-fstab-generator[3725]: Ignoring "noauto" for root device
	[  +1.560742] systemd-fstab-generator[4074]: Ignoring "noauto" for root device
	[ +17.916015] kauditd_printk_skb: 68 callbacks suppressed
	[Oct25 23:15] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.405971] systemd-fstab-generator[12122]: Ignoring "noauto" for root device
	[  +5.631549] systemd-fstab-generator[12709]: Ignoring "noauto" for root device
	[  +0.475718] systemd-fstab-generator[12845]: Ignoring "noauto" for root device
	
	
	==> etcd [cd811d7278cd] <==
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-25T23:15:25.439Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-25T23:15:26.132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-25T23:15:26.133Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-023000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-25T23:15:26.133Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T23:15:26.133Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-25T23:15:26.133Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-25T23:15:26.134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:19:47 up 9 min,  0 users,  load average: 0.25, 0.22, 0.10
	Linux running-upgrade-023000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bcab9f9b8b31] <==
	I1025 23:15:27.307315       1 controller.go:611] quota admission added evaluator for: namespaces
	I1025 23:15:27.346173       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 23:15:27.346199       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 23:15:27.354998       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1025 23:15:27.355025       1 cache.go:39] Caches are synced for autoregister controller
	I1025 23:15:27.355148       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1025 23:15:27.358220       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1025 23:15:28.075361       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1025 23:15:28.258575       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 23:15:28.261124       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 23:15:28.261145       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 23:15:28.422225       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 23:15:28.435594       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 23:15:28.512070       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1025 23:15:28.514152       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1025 23:15:28.514565       1 controller.go:611] quota admission added evaluator for: endpoints
	I1025 23:15:28.516035       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 23:15:29.377946       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1025 23:15:30.022431       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1025 23:15:30.026599       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1025 23:15:30.031140       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1025 23:15:30.080139       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 23:15:42.732647       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1025 23:15:43.032241       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1025 23:15:43.976877       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [3c45f9ff0428] <==
	I1025 23:15:42.235847       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1025 23:15:42.235850       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1025 23:15:42.238362       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1025 23:15:42.238492       1 range_allocator.go:374] Set node running-upgrade-023000 PodCIDR to [10.244.0.0/24]
	I1025 23:15:42.279161       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1025 23:15:42.281334       1 shared_informer.go:262] Caches are synced for crt configmap
	I1025 23:15:42.330687       1 shared_informer.go:262] Caches are synced for taint
	I1025 23:15:42.330724       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1025 23:15:42.330753       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-023000. Assuming now as a timestamp.
	I1025 23:15:42.330773       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1025 23:15:42.330816       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1025 23:15:42.330890       1 event.go:294] "Event occurred" object="running-upgrade-023000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-023000 event: Registered Node running-upgrade-023000 in Controller"
	I1025 23:15:42.378581       1 shared_informer.go:262] Caches are synced for persistent volume
	I1025 23:15:42.380952       1 shared_informer.go:262] Caches are synced for expand
	I1025 23:15:42.382967       1 shared_informer.go:262] Caches are synced for PV protection
	I1025 23:15:42.405410       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 23:15:42.430727       1 shared_informer.go:262] Caches are synced for attach detach
	I1025 23:15:42.435464       1 shared_informer.go:262] Caches are synced for resource quota
	I1025 23:15:42.736392       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hpdm2"
	I1025 23:15:42.850211       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 23:15:42.850226       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1025 23:15:42.855556       1 shared_informer.go:262] Caches are synced for garbage collector
	I1025 23:15:43.033665       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1025 23:15:43.233921       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-d4bkr"
	I1025 23:15:43.236894       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-5dx4q"
	
	
	==> kube-proxy [676e38c84e0a] <==
	I1025 23:15:43.958521       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1025 23:15:43.958678       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1025 23:15:43.958716       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1025 23:15:43.974508       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1025 23:15:43.974541       1 server_others.go:206] "Using iptables Proxier"
	I1025 23:15:43.974565       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1025 23:15:43.974687       1 server.go:661] "Version info" version="v1.24.1"
	I1025 23:15:43.974694       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 23:15:43.974997       1 config.go:317] "Starting service config controller"
	I1025 23:15:43.975003       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1025 23:15:43.975011       1 config.go:226] "Starting endpoint slice config controller"
	I1025 23:15:43.975013       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1025 23:15:43.975801       1 config.go:444] "Starting node config controller"
	I1025 23:15:43.975823       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1025 23:15:44.075474       1 shared_informer.go:262] Caches are synced for service config
	I1025 23:15:44.075502       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1025 23:15:44.075938       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c3d4f5a54d4c] <==
	W1025 23:15:27.304933       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 23:15:27.305868       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 23:15:27.304948       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 23:15:27.305880       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 23:15:27.304963       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 23:15:27.305892       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 23:15:27.304976       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 23:15:27.305926       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 23:15:27.304987       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 23:15:27.305956       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 23:15:27.305001       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 23:15:27.305968       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 23:15:27.305014       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 23:15:27.305999       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 23:15:28.217971       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 23:15:28.218048       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 23:15:28.217998       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 23:15:28.218084       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 23:15:28.226382       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 23:15:28.226431       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 23:15:28.329475       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 23:15:28.329492       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 23:15:28.396837       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1025 23:15:28.396930       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1025 23:15:30.302211       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
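The reflector errors above are the usual startup race: kube-scheduler lists resources before its RBAC bindings become visible, and they stop once the caches sync (last line). As a hypothetical diagnostic, a SelfSubjectAccessReview sketch that checks one of those verbs while impersonating the scheduler (impersonation requires a caller allowed to impersonate; kubeconfig path assumed as before):

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.Impersonate.UserName = "system:kube-scheduler"
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        review := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Verb: "list", Group: "apps", Resource: "statefulsets",
                },
            },
        }
        res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
            Create(context.Background(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("allowed:", res.Status.Allowed)
    }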
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-10-25 23:10:40 UTC, ends at Fri 2024-10-25 23:19:47 UTC. --
	Oct 25 23:15:32 running-upgrade-023000 kubelet[12715]: E1025 23:15:32.254612   12715 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-023000\" already exists" pod="kube-system/etcd-running-upgrade-023000"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.249123   12715 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.249431   12715 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.336114   12715 topology_manager.go:200] "Topology Admit Handler"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.450425   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bbfad4e0-bd4d-45f7-af26-d6f420076718-tmp\") pod \"storage-provisioner\" (UID: \"bbfad4e0-bd4d-45f7-af26-d6f420076718\") " pod="kube-system/storage-provisioner"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.450457   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2n2l\" (UniqueName: \"kubernetes.io/projected/bbfad4e0-bd4d-45f7-af26-d6f420076718-kube-api-access-k2n2l\") pod \"storage-provisioner\" (UID: \"bbfad4e0-bd4d-45f7-af26-d6f420076718\") " pod="kube-system/storage-provisioner"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: E1025 23:15:42.554888   12715 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: E1025 23:15:42.554910   12715 projected.go:192] Error preparing data for projected volume kube-api-access-k2n2l for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: E1025 23:15:42.554946   12715 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/bbfad4e0-bd4d-45f7-af26-d6f420076718-kube-api-access-k2n2l podName:bbfad4e0-bd4d-45f7-af26-d6f420076718 nodeName:}" failed. No retries permitted until 2024-10-25 23:15:43.054932592 +0000 UTC m=+13.045392000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k2n2l" (UniqueName: "kubernetes.io/projected/bbfad4e0-bd4d-45f7-af26-d6f420076718-kube-api-access-k2n2l") pod "storage-provisioner" (UID: "bbfad4e0-bd4d-45f7-af26-d6f420076718") : configmap "kube-root-ca.crt" not found
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.740260   12715 topology_manager.go:200] "Topology Admit Handler"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.852752   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a19ea50-3489-4f40-aee0-48ef5e82ca61-kube-proxy\") pod \"kube-proxy-hpdm2\" (UID: \"8a19ea50-3489-4f40-aee0-48ef5e82ca61\") " pod="kube-system/kube-proxy-hpdm2"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.852901   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a19ea50-3489-4f40-aee0-48ef5e82ca61-xtables-lock\") pod \"kube-proxy-hpdm2\" (UID: \"8a19ea50-3489-4f40-aee0-48ef5e82ca61\") " pod="kube-system/kube-proxy-hpdm2"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.852924   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a19ea50-3489-4f40-aee0-48ef5e82ca61-lib-modules\") pod \"kube-proxy-hpdm2\" (UID: \"8a19ea50-3489-4f40-aee0-48ef5e82ca61\") " pod="kube-system/kube-proxy-hpdm2"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: I1025 23:15:42.852939   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h5z7\" (UniqueName: \"kubernetes.io/projected/8a19ea50-3489-4f40-aee0-48ef5e82ca61-kube-api-access-8h5z7\") pod \"kube-proxy-hpdm2\" (UID: \"8a19ea50-3489-4f40-aee0-48ef5e82ca61\") " pod="kube-system/kube-proxy-hpdm2"
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: E1025 23:15:42.957499   12715 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: E1025 23:15:42.957524   12715 projected.go:192] Error preparing data for projected volume kube-api-access-8h5z7 for pod kube-system/kube-proxy-hpdm2: configmap "kube-root-ca.crt" not found
	Oct 25 23:15:42 running-upgrade-023000 kubelet[12715]: E1025 23:15:42.957554   12715 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8a19ea50-3489-4f40-aee0-48ef5e82ca61-kube-api-access-8h5z7 podName:8a19ea50-3489-4f40-aee0-48ef5e82ca61 nodeName:}" failed. No retries permitted until 2024-10-25 23:15:43.457543267 +0000 UTC m=+13.448002676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8h5z7" (UniqueName: "kubernetes.io/projected/8a19ea50-3489-4f40-aee0-48ef5e82ca61-kube-api-access-8h5z7") pod "kube-proxy-hpdm2" (UID: "8a19ea50-3489-4f40-aee0-48ef5e82ca61") : configmap "kube-root-ca.crt" not found
	Oct 25 23:15:43 running-upgrade-023000 kubelet[12715]: I1025 23:15:43.236761   12715 topology_manager.go:200] "Topology Admit Handler"
	Oct 25 23:15:43 running-upgrade-023000 kubelet[12715]: I1025 23:15:43.240864   12715 topology_manager.go:200] "Topology Admit Handler"
	Oct 25 23:15:43 running-upgrade-023000 kubelet[12715]: I1025 23:15:43.357144   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hqzv\" (UniqueName: \"kubernetes.io/projected/a4297752-3ac0-4ff7-9251-4a80b1a9f3f3-kube-api-access-5hqzv\") pod \"coredns-6d4b75cb6d-5dx4q\" (UID: \"a4297752-3ac0-4ff7-9251-4a80b1a9f3f3\") " pod="kube-system/coredns-6d4b75cb6d-5dx4q"
	Oct 25 23:15:43 running-upgrade-023000 kubelet[12715]: I1025 23:15:43.357178   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rms9q\" (UniqueName: \"kubernetes.io/projected/88336718-f439-4b12-b555-f464ce5dec2c-kube-api-access-rms9q\") pod \"coredns-6d4b75cb6d-d4bkr\" (UID: \"88336718-f439-4b12-b555-f464ce5dec2c\") " pod="kube-system/coredns-6d4b75cb6d-d4bkr"
	Oct 25 23:15:43 running-upgrade-023000 kubelet[12715]: I1025 23:15:43.357212   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88336718-f439-4b12-b555-f464ce5dec2c-config-volume\") pod \"coredns-6d4b75cb6d-d4bkr\" (UID: \"88336718-f439-4b12-b555-f464ce5dec2c\") " pod="kube-system/coredns-6d4b75cb6d-d4bkr"
	Oct 25 23:15:43 running-upgrade-023000 kubelet[12715]: I1025 23:15:43.357240   12715 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4297752-3ac0-4ff7-9251-4a80b1a9f3f3-config-volume\") pod \"coredns-6d4b75cb6d-5dx4q\" (UID: \"a4297752-3ac0-4ff7-9251-4a80b1a9f3f3\") " pod="kube-system/coredns-6d4b75cb6d-5dx4q"
	Oct 25 23:19:21 running-upgrade-023000 kubelet[12715]: I1025 23:19:21.551351   12715 scope.go:110] "RemoveContainer" containerID="e4a8eaea175253afea9cf0b5fdaeedc5b7563bb18dae865c0bf3d5b95c57ea62"
	Oct 25 23:19:21 running-upgrade-023000 kubelet[12715]: I1025 23:19:21.562571   12715 scope.go:110] "RemoveContainer" containerID="24408302c4295ee12652abfda02c0a7209df8734a3b2bf35216474b840c9d90b"
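
	Note: the repeated configmap "kube-root-ca.crt" not found mount failures above are also transient: kube-controller-manager's root-CA publisher creates that ConfigMap in every namespace shortly after startup, which is why the kubelet schedules 500ms retries rather than failing the pods outright. An illustrative check (standard kubectl, not part of this test run):
	
	  kubectl -n kube-system get configmap kube-root-ca.crt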
	
	
	==> storage-provisioner [79775a806b06] <==
	I1025 23:15:43.416369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 23:15:43.420560       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 23:15:43.420576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 23:15:43.425479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 23:15:43.425566       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-023000_2edb965d-c85d-454f-b792-e378794a50e5!
	I1025 23:15:43.425984       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f962f4d6-9d5f-435f-b1e2-435026460206", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-023000_2edb965d-c85d-454f-b792-e378794a50e5 became leader
	I1025 23:15:43.526826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-023000_2edb965d-c85d-454f-b792-e378794a50e5!
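
	Note: the storage-provisioner serializes itself through client-go leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object, as the LeaderElection event above records. To see which instance currently holds the lease, one could inspect that object's leader annotation (standard kubectl, illustrative only):
	
	  kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml | grep leader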
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-023000 -n running-upgrade-023000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-023000 -n running-upgrade-023000: exit status 2 (15.570490916s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-023000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-023000
--- FAIL: TestRunningBinaryUpgrade (588.64s)

TestKubernetesUpgrade (18.47s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.835758709s)

-- stdout --
	* [kubernetes-upgrade-410000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-410000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:13:15.451619   13037 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:13:15.451774   13037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:13:15.451778   13037 out.go:358] Setting ErrFile to fd 2...
	I1025 16:13:15.451780   13037 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:13:15.451907   13037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:13:15.453100   13037 out.go:352] Setting JSON to false
	I1025 16:13:15.470858   13037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7233,"bootTime":1729890762,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:13:15.470929   13037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:13:15.477377   13037 out.go:177] * [kubernetes-upgrade-410000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:13:15.484340   13037 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:13:15.484383   13037 notify.go:220] Checking for updates...
	I1025 16:13:15.490339   13037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:13:15.493360   13037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:13:15.496425   13037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:13:15.499299   13037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:13:15.502312   13037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:13:15.505805   13037 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:13:15.505884   13037 config.go:182] Loaded profile config "running-upgrade-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:13:15.505922   13037 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:13:15.509299   13037 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:13:15.516382   13037 start.go:297] selected driver: qemu2
	I1025 16:13:15.516390   13037 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:13:15.516399   13037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:13:15.518864   13037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:13:15.521266   13037 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:13:15.524449   13037 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 16:13:15.524479   13037 cni.go:84] Creating CNI manager for ""
	I1025 16:13:15.524502   13037 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 16:13:15.524529   13037 start.go:340] cluster config:
	{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:13:15.529049   13037 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:13:15.537355   13037 out.go:177] * Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	I1025 16:13:15.541317   13037 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 16:13:15.541333   13037 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 16:13:15.541342   13037 cache.go:56] Caching tarball of preloaded images
	I1025 16:13:15.541414   13037 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:13:15.541419   13037 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 16:13:15.541478   13037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kubernetes-upgrade-410000/config.json ...
	I1025 16:13:15.541488   13037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kubernetes-upgrade-410000/config.json: {Name:mkf58568bc85ec9b8a22af4bd6c66cce8c8ace6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:13:15.541793   13037 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:13:15.541835   13037 start.go:364] duration metric: took 36.25µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I1025 16:13:15.541845   13037 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:13:15.541865   13037 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:13:15.545377   13037 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:13:15.571463   13037 start.go:159] libmachine.API.Create for "kubernetes-upgrade-410000" (driver="qemu2")
	I1025 16:13:15.571495   13037 client.go:168] LocalClient.Create starting
	I1025 16:13:15.571571   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:13:15.571612   13037 main.go:141] libmachine: Decoding PEM data...
	I1025 16:13:15.571622   13037 main.go:141] libmachine: Parsing certificate...
	I1025 16:13:15.571659   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:13:15.571688   13037 main.go:141] libmachine: Decoding PEM data...
	I1025 16:13:15.571696   13037 main.go:141] libmachine: Parsing certificate...
	I1025 16:13:15.572049   13037 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:13:15.740956   13037 main.go:141] libmachine: Creating SSH key...
	I1025 16:13:15.859990   13037 main.go:141] libmachine: Creating Disk image...
	I1025 16:13:15.859998   13037 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:13:15.860231   13037 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:15.872315   13037 main.go:141] libmachine: STDOUT: 
	I1025 16:13:15.872336   13037 main.go:141] libmachine: STDERR: 
	I1025 16:13:15.872396   13037 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2 +20000M
	I1025 16:13:15.880908   13037 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:13:15.880925   13037 main.go:141] libmachine: STDERR: 
	I1025 16:13:15.880941   13037 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:15.880947   13037 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:13:15.880959   13037 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:13:15.880991   13037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:eb:d6:ad:93:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:15.882870   13037 main.go:141] libmachine: STDOUT: 
	I1025 16:13:15.882885   13037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:13:15.882907   13037 client.go:171] duration metric: took 311.408208ms to LocalClient.Create
	I1025 16:13:17.885119   13037 start.go:128] duration metric: took 2.343244375s to createHost
	I1025 16:13:17.885193   13037 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 2.343365583s
	W1025 16:13:17.885241   13037 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:13:17.899127   13037 out.go:177] * Deleting "kubernetes-upgrade-410000" in qemu2 ...
	W1025 16:13:17.922179   13037 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:13:17.922202   13037 start.go:729] Will try again in 5 seconds ...
	I1025 16:13:22.924400   13037 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:13:22.924901   13037 start.go:364] duration metric: took 417.167µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I1025 16:13:22.925035   13037 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:13:22.925221   13037 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:13:22.934753   13037 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:13:22.972736   13037 start.go:159] libmachine.API.Create for "kubernetes-upgrade-410000" (driver="qemu2")
	I1025 16:13:22.972799   13037 client.go:168] LocalClient.Create starting
	I1025 16:13:22.972940   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:13:22.973021   13037 main.go:141] libmachine: Decoding PEM data...
	I1025 16:13:22.973040   13037 main.go:141] libmachine: Parsing certificate...
	I1025 16:13:22.973119   13037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:13:22.973170   13037 main.go:141] libmachine: Decoding PEM data...
	I1025 16:13:22.973186   13037 main.go:141] libmachine: Parsing certificate...
	I1025 16:13:22.973789   13037 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:13:23.140353   13037 main.go:141] libmachine: Creating SSH key...
	I1025 16:13:23.188418   13037 main.go:141] libmachine: Creating Disk image...
	I1025 16:13:23.188426   13037 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:13:23.188629   13037 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:23.198627   13037 main.go:141] libmachine: STDOUT: 
	I1025 16:13:23.198647   13037 main.go:141] libmachine: STDERR: 
	I1025 16:13:23.198704   13037 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2 +20000M
	I1025 16:13:23.207340   13037 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:13:23.207361   13037 main.go:141] libmachine: STDERR: 
	I1025 16:13:23.207376   13037 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:23.207384   13037 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:13:23.207395   13037 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:13:23.207427   13037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d9:1f:10:01:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:23.209378   13037 main.go:141] libmachine: STDOUT: 
	I1025 16:13:23.209393   13037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:13:23.209408   13037 client.go:171] duration metric: took 236.605209ms to LocalClient.Create
	I1025 16:13:25.211608   13037 start.go:128] duration metric: took 2.2863635s to createHost
	I1025 16:13:25.211682   13037 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 2.2867765s
	W1025 16:13:25.212082   13037 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:13:25.221798   13037 out.go:201] 
	W1025 16:13:25.228095   13037 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:13:25.228123   13037 out.go:270] * 
	* 
	W1025 16:13:25.230762   13037 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:13:25.239819   13037 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
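
Note: both create attempts above fail at the identical point: socket_vmnet_client cannot reach /var/run/socket_vmnet (the SocketVMnetPath in the cluster config), so QEMU never receives its network file descriptor and the Kubernetes version under test is never actually exercised. A host-side triage sketch, assuming the agent runs the Homebrew socket_vmnet service these paths point at:

	  ls -l /var/run/socket_vmnet                # does the daemon's socket exist?
	  pgrep -fl socket_vmnet                     # is the daemon process running?
	  sudo brew services restart socket_vmnet    # restart the helper, per the formula's instructions
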
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-410000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-410000: (3.245629458s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-410000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-410000 status --format={{.Host}}: exit status 7 (69.638333ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.179442209s)

-- stdout --
	* [kubernetes-upgrade-410000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:13:28.606973   13071 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:13:28.607127   13071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:13:28.607130   13071 out.go:358] Setting ErrFile to fd 2...
	I1025 16:13:28.607132   13071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:13:28.607300   13071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:13:28.608342   13071 out.go:352] Setting JSON to false
	I1025 16:13:28.627287   13071 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7246,"bootTime":1729890762,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:13:28.627362   13071 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:13:28.632893   13071 out.go:177] * [kubernetes-upgrade-410000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:13:28.640821   13071 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:13:28.640882   13071 notify.go:220] Checking for updates...
	I1025 16:13:28.648700   13071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:13:28.651863   13071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:13:28.654858   13071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:13:28.657854   13071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:13:28.660811   13071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:13:28.664195   13071 config.go:182] Loaded profile config "kubernetes-upgrade-410000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1025 16:13:28.664481   13071 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:13:28.667861   13071 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:13:28.674793   13071 start.go:297] selected driver: qemu2
	I1025 16:13:28.674801   13071 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:13:28.674853   13071 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:13:28.677768   13071 cni.go:84] Creating CNI manager for ""
	I1025 16:13:28.677810   13071 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:13:28.677835   13071 start.go:340] cluster config:
	{Name:kubernetes-upgrade-410000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-410000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:13:28.682250   13071 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:13:28.690840   13071 out.go:177] * Starting "kubernetes-upgrade-410000" primary control-plane node in "kubernetes-upgrade-410000" cluster
	I1025 16:13:28.694839   13071 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:13:28.694855   13071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:13:28.694863   13071 cache.go:56] Caching tarball of preloaded images
	I1025 16:13:28.694937   13071 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:13:28.694943   13071 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:13:28.695001   13071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kubernetes-upgrade-410000/config.json ...
	I1025 16:13:28.695485   13071 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:13:28.695517   13071 start.go:364] duration metric: took 25.833µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I1025 16:13:28.695526   13071 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:13:28.695532   13071 fix.go:54] fixHost starting: 
	I1025 16:13:28.695667   13071 fix.go:112] recreateIfNeeded on kubernetes-upgrade-410000: state=Stopped err=<nil>
	W1025 16:13:28.695676   13071 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:13:28.702850   13071 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	I1025 16:13:28.706793   13071 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:13:28.706824   13071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d9:1f:10:01:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:28.708897   13071 main.go:141] libmachine: STDOUT: 
	I1025 16:13:28.708922   13071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:13:28.708949   13071 fix.go:56] duration metric: took 13.417792ms for fixHost
	I1025 16:13:28.708954   13071 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 13.43225ms
	W1025 16:13:28.708959   13071 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:13:28.708997   13071 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:13:28.709001   13071 start.go:729] Will try again in 5 seconds ...
	I1025 16:13:33.711016   13071 start.go:360] acquireMachinesLock for kubernetes-upgrade-410000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:13:33.711089   13071 start.go:364] duration metric: took 54.208µs to acquireMachinesLock for "kubernetes-upgrade-410000"
	I1025 16:13:33.711110   13071 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:13:33.711114   13071 fix.go:54] fixHost starting: 
	I1025 16:13:33.711267   13071 fix.go:112] recreateIfNeeded on kubernetes-upgrade-410000: state=Stopped err=<nil>
	W1025 16:13:33.711272   13071 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:13:33.717398   13071 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-410000" ...
	I1025 16:13:33.721438   13071 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:13:33.721475   13071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:d9:1f:10:01:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubernetes-upgrade-410000/disk.qcow2
	I1025 16:13:33.723561   13071 main.go:141] libmachine: STDOUT: 
	I1025 16:13:33.723576   13071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:13:33.723593   13071 fix.go:56] duration metric: took 12.47875ms for fixHost
	I1025 16:13:33.723603   13071 start.go:83] releasing machines lock for "kubernetes-upgrade-410000", held for 12.50325ms
	W1025 16:13:33.723654   13071 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-410000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:13:33.730417   13071 out.go:201] 
	W1025 16:13:33.733430   13071 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:13:33.733437   13071 out.go:270] * 
	* 
	W1025 16:13:33.733859   13071 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:13:33.744324   13071 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-410000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-410000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-410000 version --output=json: exit status 1 (27.667375ms)

** stderr ** 
	error: context "kubernetes-upgrade-410000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-25 16:13:33.780541 -0700 PDT m=+937.038742293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-410000 -n kubernetes-upgrade-410000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-410000 -n kubernetes-upgrade-410000: exit status 7 (34.17825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-410000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-410000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-410000
--- FAIL: TestKubernetesUpgrade (18.47s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.92s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19758
- KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current930648221/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.92s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19758
- KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2429398191/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)
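
Note: both skip-upgrade subtests fail for the same reason. The hyperkit driver is built only for x86-64 macOS, so on this darwin/arm64 host minikube rejects it with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A minimal sketch of the distinction, assuming an Apple-silicon host like the one in this run:

    # Fails immediately on darwin/arm64 with DRV_UNSUPPORTED_OS (exit status 56).
    out/minikube-darwin-arm64 start --driver=hyperkit
    # qemu2 is the driver the rest of this run uses on arm64 and is accepted.
    out/minikube-darwin-arm64 start --driver=qemu2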

TestStoppedBinaryUpgrade/Upgrade (574.76s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2586460860 start -p stopped-upgrade-782000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2586460860 start -p stopped-upgrade-782000 --memory=2200 --vm-driver=qemu2 : (41.082409375s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2586460860 -p stopped-upgrade-782000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2586460860 -p stopped-upgrade-782000 stop: (12.115059042s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-782000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-782000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.457821542s)

-- stdout --
	* [stopped-upgrade-782000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-782000" primary control-plane node in "stopped-upgrade-782000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-782000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1025 16:14:28.282945   13110 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:14:28.283124   13110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:14:28.283128   13110 out.go:358] Setting ErrFile to fd 2...
	I1025 16:14:28.283131   13110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:14:28.283271   13110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:14:28.284560   13110 out.go:352] Setting JSON to false
	I1025 16:14:28.304950   13110 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7306,"bootTime":1729890762,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:14:28.305042   13110 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:14:28.309767   13110 out.go:177] * [stopped-upgrade-782000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:14:28.318838   13110 notify.go:220] Checking for updates...
	I1025 16:14:28.322674   13110 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:14:28.325708   13110 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:14:28.328744   13110 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:14:28.331679   13110 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:14:28.338771   13110 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:14:28.342675   13110 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:14:28.346935   13110 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:14:28.350531   13110 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1025 16:14:28.353794   13110 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:14:28.356716   13110 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:14:28.364707   13110 start.go:297] selected driver: qemu2
	I1025 16:14:28.364713   13110 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:14:28.364768   13110 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:14:28.367437   13110 cni.go:84] Creating CNI manager for ""
	I1025 16:14:28.367465   13110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:14:28.367487   13110 start.go:340] cluster config:
	{Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:14:28.367542   13110 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:14:28.375669   13110 out.go:177] * Starting "stopped-upgrade-782000" primary control-plane node in "stopped-upgrade-782000" cluster
	I1025 16:14:28.379721   13110 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 16:14:28.379741   13110 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1025 16:14:28.379750   13110 cache.go:56] Caching tarball of preloaded images
	I1025 16:14:28.379828   13110 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:14:28.379838   13110 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1025 16:14:28.379878   13110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/config.json ...
	I1025 16:14:28.380295   13110 start.go:360] acquireMachinesLock for stopped-upgrade-782000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:14:28.380339   13110 start.go:364] duration metric: took 37.958µs to acquireMachinesLock for "stopped-upgrade-782000"
	I1025 16:14:28.380346   13110 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:14:28.380351   13110 fix.go:54] fixHost starting: 
	I1025 16:14:28.380448   13110 fix.go:112] recreateIfNeeded on stopped-upgrade-782000: state=Stopped err=<nil>
	W1025 16:14:28.380456   13110 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:14:28.384693   13110 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-782000" ...
	I1025 16:14:28.392645   13110 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:14:28.392714   13110 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/qemu.pid -nic user,model=virtio,hostfwd=tcp::62363-:22,hostfwd=tcp::62364-:2376,hostname=stopped-upgrade-782000 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/disk.qcow2
	I1025 16:14:28.439013   13110 main.go:141] libmachine: STDOUT: 
	I1025 16:14:28.439042   13110 main.go:141] libmachine: STDERR: 
	I1025 16:14:28.439048   13110 main.go:141] libmachine: Waiting for VM to start (ssh -p 62363 docker@127.0.0.1)...
	I1025 16:14:48.332756   13110 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/config.json ...
	I1025 16:14:48.333776   13110 machine.go:93] provisionDockerMachine start ...
	I1025 16:14:48.334213   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.334633   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.334647   13110 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 16:14:48.423239   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 16:14:48.423262   13110 buildroot.go:166] provisioning hostname "stopped-upgrade-782000"
	I1025 16:14:48.423362   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.423546   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.423558   13110 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-782000 && echo "stopped-upgrade-782000" | sudo tee /etc/hostname
	I1025 16:14:48.503656   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-782000
	
	I1025 16:14:48.503732   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.503863   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.503876   13110 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-782000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-782000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-782000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 16:14:48.576306   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 16:14:48.576319   13110 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19758-10490/.minikube CaCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19758-10490/.minikube}
	I1025 16:14:48.576330   13110 buildroot.go:174] setting up certificates
	I1025 16:14:48.576335   13110 provision.go:84] configureAuth start
	I1025 16:14:48.576343   13110 provision.go:143] copyHostCerts
	I1025 16:14:48.576412   13110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem, removing ...
	I1025 16:14:48.576418   13110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem
	I1025 16:14:48.576533   13110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.pem (1078 bytes)
	I1025 16:14:48.576740   13110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem, removing ...
	I1025 16:14:48.576746   13110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem
	I1025 16:14:48.576797   13110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/cert.pem (1123 bytes)
	I1025 16:14:48.576931   13110 exec_runner.go:144] found /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem, removing ...
	I1025 16:14:48.576935   13110 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem
	I1025 16:14:48.576977   13110 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19758-10490/.minikube/key.pem (1675 bytes)
	I1025 16:14:48.577088   13110 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-782000 san=[127.0.0.1 localhost minikube stopped-upgrade-782000]
	I1025 16:14:48.667891   13110 provision.go:177] copyRemoteCerts
	I1025 16:14:48.667939   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 16:14:48.667946   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:14:48.704330   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1025 16:14:48.711701   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 16:14:48.718943   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 16:14:48.725860   13110 provision.go:87] duration metric: took 149.518083ms to configureAuth
	I1025 16:14:48.725870   13110 buildroot.go:189] setting minikube options for container-runtime
	I1025 16:14:48.725987   13110 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:14:48.726038   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.726124   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.726129   13110 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 16:14:48.795326   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 16:14:48.795335   13110 buildroot.go:70] root file system type: tmpfs
	I1025 16:14:48.795389   13110 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 16:14:48.795448   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.795556   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.795592   13110 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 16:14:48.867380   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 16:14:48.867447   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:48.867560   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:48.867569   13110 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 16:14:49.264586   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 16:14:49.264599   13110 machine.go:96] duration metric: took 930.818541ms to provisionDockerMachine
	I1025 16:14:49.264606   13110 start.go:293] postStartSetup for "stopped-upgrade-782000" (driver="qemu2")
	I1025 16:14:49.264613   13110 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 16:14:49.264686   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 16:14:49.264696   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:14:49.302669   13110 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 16:14:49.304287   13110 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 16:14:49.304296   13110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19758-10490/.minikube/addons for local assets ...
	I1025 16:14:49.304381   13110 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19758-10490/.minikube/files for local assets ...
	I1025 16:14:49.304476   13110 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem -> 109982.pem in /etc/ssl/certs
	I1025 16:14:49.304584   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 16:14:49.308409   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem --> /etc/ssl/certs/109982.pem (1708 bytes)
	I1025 16:14:49.316278   13110 start.go:296] duration metric: took 51.664583ms for postStartSetup
	I1025 16:14:49.316298   13110 fix.go:56] duration metric: took 20.936093709s for fixHost
	I1025 16:14:49.316365   13110 main.go:141] libmachine: Using SSH client type: native
	I1025 16:14:49.316485   13110 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x105026480] 0x105028cc0 <nil>  [] 0s} localhost 62363 <nil> <nil>}
	I1025 16:14:49.316491   13110 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 16:14:49.385972   13110 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729898089.131940504
	
	I1025 16:14:49.385980   13110 fix.go:216] guest clock: 1729898089.131940504
	I1025 16:14:49.385988   13110 fix.go:229] Guest: 2024-10-25 16:14:49.131940504 -0700 PDT Remote: 2024-10-25 16:14:49.3163 -0700 PDT m=+21.067462959 (delta=-184.359496ms)
	I1025 16:14:49.386002   13110 fix.go:200] guest clock delta is within tolerance: -184.359496ms
	I1025 16:14:49.386004   13110 start.go:83] releasing machines lock for "stopped-upgrade-782000", held for 21.005807792s
	I1025 16:14:49.386073   13110 ssh_runner.go:195] Run: cat /version.json
	I1025 16:14:49.386076   13110 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 16:14:49.386081   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:14:49.386092   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	W1025 16:14:49.386666   13110 sshutil.go:64] dial failure (will retry): dial tcp [::1]:62363: connect: connection refused
	I1025 16:14:49.386695   13110 retry.go:31] will retry after 248.187236ms: dial tcp [::1]:62363: connect: connection refused
	W1025 16:14:49.678938   13110 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 16:14:49.679041   13110 ssh_runner.go:195] Run: systemctl --version
	I1025 16:14:49.681616   13110 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 16:14:49.683937   13110 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 16:14:49.683990   13110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 16:14:49.688050   13110 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 16:14:49.693971   13110 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 16:14:49.693980   13110 start.go:495] detecting cgroup driver to use...
	I1025 16:14:49.694065   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 16:14:49.702045   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1025 16:14:49.705534   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 16:14:49.708527   13110 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 16:14:49.708559   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 16:14:49.711841   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 16:14:49.714997   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 16:14:49.717821   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 16:14:49.720645   13110 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 16:14:49.723867   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 16:14:49.727240   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 16:14:49.730185   13110 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 16:14:49.733320   13110 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 16:14:49.736240   13110 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 16:14:49.739468   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:49.820226   13110 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 16:14:49.827268   13110 start.go:495] detecting cgroup driver to use...
	I1025 16:14:49.827363   13110 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 16:14:49.832743   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 16:14:49.837902   13110 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 16:14:49.843474   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 16:14:49.848334   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 16:14:49.853295   13110 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 16:14:49.911662   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 16:14:49.916997   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 16:14:49.922492   13110 ssh_runner.go:195] Run: which cri-dockerd
	I1025 16:14:49.923748   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 16:14:49.927003   13110 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 16:14:49.932203   13110 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 16:14:50.024679   13110 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 16:14:50.093727   13110 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 16:14:50.093800   13110 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 16:14:50.099127   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:50.163392   13110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 16:14:51.292203   13110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.128790292s)
	I1025 16:14:51.292324   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 16:14:51.297957   13110 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1025 16:14:51.304736   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 16:14:51.310473   13110 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 16:14:51.392629   13110 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 16:14:51.472134   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:51.546027   13110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 16:14:51.551975   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 16:14:51.556979   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:51.641670   13110 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 16:14:51.680099   13110 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 16:14:51.680203   13110 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 16:14:51.683775   13110 start.go:563] Will wait 60s for crictl version
	I1025 16:14:51.683843   13110 ssh_runner.go:195] Run: which crictl
	I1025 16:14:51.685295   13110 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 16:14:51.700818   13110 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1025 16:14:51.700902   13110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 16:14:51.718073   13110 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 16:14:51.736912   13110 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1025 16:14:51.737092   13110 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1025 16:14:51.738341   13110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 16:14:51.741788   13110 kubeadm.go:883] updating cluster {Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1025 16:14:51.741831   13110 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 16:14:51.741878   13110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 16:14:51.752499   13110 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 16:14:51.752509   13110 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 16:14:51.752567   13110 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 16:14:51.756096   13110 ssh_runner.go:195] Run: which lz4
	I1025 16:14:51.757483   13110 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 16:14:51.758631   13110 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 16:14:51.758642   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1025 16:14:52.667389   13110 docker.go:653] duration metric: took 909.965333ms to copy over tarball
	I1025 16:14:52.667463   13110 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 16:14:53.853951   13110 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.186481041s)
	I1025 16:14:53.853965   13110 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 16:14:53.869795   13110 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 16:14:53.872676   13110 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1025 16:14:53.877869   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:53.959990   13110 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 16:14:55.460674   13110 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.50067875s)
	I1025 16:14:55.460790   13110 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 16:14:55.471403   13110 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 16:14:55.471416   13110 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 16:14:55.471422   13110 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 16:14:55.475532   13110 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:55.477500   13110 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:55.479950   13110 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:55.480299   13110 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:55.482027   13110 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:55.482027   13110 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:55.483355   13110 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:55.483540   13110 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:55.484766   13110 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:55.485364   13110 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:55.485853   13110 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:55.485943   13110 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:55.486904   13110 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 16:14:55.487364   13110 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:55.487945   13110 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:55.488756   13110 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 16:14:56.003910   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:56.015271   13110 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1025 16:14:56.015307   13110 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:56.015369   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 16:14:56.025576   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1025 16:14:56.051092   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	W1025 16:14:56.052880   13110 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 16:14:56.053309   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:56.063177   13110 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1025 16:14:56.063199   13110 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:56.063266   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1025 16:14:56.069819   13110 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1025 16:14:56.069840   13110 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:56.069894   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 16:14:56.081026   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1025 16:14:56.082365   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 16:14:56.082546   13110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 16:14:56.084423   13110 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1025 16:14:56.084450   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1025 16:14:56.128950   13110 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 16:14:56.128964   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1025 16:14:56.143677   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:56.172988   13110 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1025 16:14:56.173057   13110 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1025 16:14:56.173077   13110 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:56.173142   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1025 16:14:56.176970   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:56.182984   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 16:14:56.192533   13110 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1025 16:14:56.192556   13110 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:56.192617   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1025 16:14:56.202431   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1025 16:14:56.231642   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:56.242630   13110 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1025 16:14:56.242650   13110 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:56.242715   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1025 16:14:56.253017   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1025 16:14:56.323098   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 16:14:56.333682   13110 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1025 16:14:56.333702   13110 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1025 16:14:56.333767   13110 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1025 16:14:56.344075   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 16:14:56.344220   13110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 16:14:56.345820   13110 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1025 16:14:56.345838   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1025 16:14:56.353412   13110 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 16:14:56.353422   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1025 16:14:56.379746   13110 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1025 16:14:56.399369   13110 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 16:14:56.399533   13110 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:56.410099   13110 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 16:14:56.410122   13110 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:56.410189   13110 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:14:56.424039   13110 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 16:14:56.424182   13110 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 16:14:56.425638   13110 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 16:14:56.425651   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 16:14:56.455603   13110 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 16:14:56.455618   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 16:14:56.693941   13110 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 16:14:56.693985   13110 cache_images.go:92] duration metric: took 1.222564542s to LoadCachedImages
	W1025 16:14:56.694029   13110 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I1025 16:14:56.694037   13110 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1025 16:14:56.694092   13110 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-782000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 16:14:56.694164   13110 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 16:14:56.710243   13110 cni.go:84] Creating CNI manager for ""
	I1025 16:14:56.710263   13110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:14:56.710274   13110 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 16:14:56.710285   13110 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-782000 NodeName:stopped-upgrade-782000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 16:14:56.710370   13110 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-782000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
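The rendered config above is four YAML documents joined by `---` markers: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch that walks such a multi-document file and prints each document's apiVersion and kind, assuming the third-party gopkg.in/yaml.v3 package is available; the filename is illustrative:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// Only pull out the two identifying fields of each document.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break // Decode returns io.EOF once all documents are consumed
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}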
	
	I1025 16:14:56.710451   13110 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1025 16:14:56.713294   13110 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 16:14:56.713333   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 16:14:56.716207   13110 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 16:14:56.721426   13110 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 16:14:56.726242   13110 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1025 16:14:56.731383   13110 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1025 16:14:56.732627   13110 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 16:14:56.736347   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:14:56.816349   13110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:14:56.822002   13110 certs.go:68] Setting up /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000 for IP: 10.0.2.15
	I1025 16:14:56.822012   13110 certs.go:194] generating shared ca certs ...
	I1025 16:14:56.822021   13110 certs.go:226] acquiring lock for ca certs: {Name:mk87b032e78a00eded37575daed7123f238f6628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:56.822195   13110 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.key
	I1025 16:14:56.822900   13110 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.key
	I1025 16:14:56.822912   13110 certs.go:256] generating profile certs ...
	I1025 16:14:56.823110   13110 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.key
	I1025 16:14:56.823126   13110 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be
	I1025 16:14:56.823138   13110 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1025 16:14:56.866141   13110 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be ...
	I1025 16:14:56.866158   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be: {Name:mk5d3a3941a8b7fcac917f24ade71303566e028d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:56.866720   13110 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be ...
	I1025 16:14:56.866731   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be: {Name:mk95d51aefdc6fb2c116ec879843759c674e4078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:56.866907   13110 certs.go:381] copying /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt.7d1b60be -> /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt
	I1025 16:14:56.867030   13110 certs.go:385] copying /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key.7d1b60be -> /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key
	I1025 16:14:56.867250   13110 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/proxy-client.key
	I1025 16:14:56.867400   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998.pem (1338 bytes)
	W1025 16:14:56.867558   13110 certs.go:480] ignoring /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998_empty.pem, impossibly tiny 0 bytes
	I1025 16:14:56.867565   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 16:14:56.867585   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem (1078 bytes)
	I1025 16:14:56.867606   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem (1123 bytes)
	I1025 16:14:56.867636   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/key.pem (1675 bytes)
	I1025 16:14:56.867678   13110 certs.go:484] found cert: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem (1708 bytes)
	I1025 16:14:56.868084   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 16:14:56.876351   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 16:14:56.883596   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 16:14:56.891388   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 16:14:56.898370   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 16:14:56.904902   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 16:14:56.911936   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 16:14:56.919643   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 16:14:56.926758   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 16:14:56.933633   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/10998.pem --> /usr/share/ca-certificates/10998.pem (1338 bytes)
	I1025 16:14:56.940531   13110 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/ssl/certs/109982.pem --> /usr/share/ca-certificates/109982.pem (1708 bytes)
	I1025 16:14:56.947723   13110 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 16:14:56.953172   13110 ssh_runner.go:195] Run: openssl version
	I1025 16:14:56.955089   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 16:14:56.958042   13110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:14:56.959553   13110 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 23:10 /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:14:56.959581   13110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 16:14:56.961479   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 16:14:56.964604   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10998.pem && ln -fs /usr/share/ca-certificates/10998.pem /etc/ssl/certs/10998.pem"
	I1025 16:14:56.968140   13110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10998.pem
	I1025 16:14:56.969745   13110 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 22:58 /usr/share/ca-certificates/10998.pem
	I1025 16:14:56.969772   13110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10998.pem
	I1025 16:14:56.971530   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10998.pem /etc/ssl/certs/51391683.0"
	I1025 16:14:56.974919   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/109982.pem && ln -fs /usr/share/ca-certificates/109982.pem /etc/ssl/certs/109982.pem"
	I1025 16:14:56.978030   13110 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/109982.pem
	I1025 16:14:56.979423   13110 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 22:58 /usr/share/ca-certificates/109982.pem
	I1025 16:14:56.979452   13110 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/109982.pem
	I1025 16:14:56.981265   13110 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/109982.pem /etc/ssl/certs/3ec20f2e.0"
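The three openssl/ln pairs above install each CA into the system trust store: compute the OpenSSL subject hash, then symlink <hash>.0 in /etc/ssl/certs at the PEM so OpenSSL-based clients can resolve it. A sketch of the same two steps in Go, shelling out to openssl for the hash; the paths are taken from the log but the helper name linkCACert is an assumption (needs root to write the link):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the subject hash of a CA PEM and symlinks
// /etc/ssl/certs/<hash>.0 at it, mirroring `openssl x509 -hash` + `ln -fs`.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}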
	I1025 16:14:56.984359   13110 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 16:14:56.985771   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 16:14:56.988355   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 16:14:56.990382   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 16:14:56.992556   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 16:14:56.994329   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 16:14:56.996096   13110 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
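The `-checkend 86400` runs above verify that each control-plane certificate is still valid 24 hours out, so a restart won't come up with expiring certs. The equivalent check in Go against a PEM file, as a minimal sketch using only the standard library (the file path is one of the targets scp'd above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same semantics as `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h")
}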
	I1025 16:14:56.998034   13110 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-782000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62397 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-782000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 16:14:56.998110   13110 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 16:14:57.008270   13110 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 16:14:57.011868   13110 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 16:14:57.011878   13110 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 16:14:57.011912   13110 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 16:14:57.015325   13110 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 16:14:57.015784   13110 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-782000" does not appear in /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:14:57.015906   13110 kubeconfig.go:62] /Users/jenkins/minikube-integration/19758-10490/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-782000" cluster setting kubeconfig missing "stopped-upgrade-782000" context setting]
	I1025 16:14:57.016116   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:14:57.016561   13110 kapi.go:59] client config for stopped-upgrade-782000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106a82510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
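The rest.Config dump above is ordinary client-go TLS client-certificate auth pointed at https://10.0.2.15:8443. A minimal sketch of building the same kind of config from a kubeconfig file with client-go, hedged: the kubeconfig path is illustrative, and this constructs a fresh config rather than reproducing minikube's internal kapi helper:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// BuildConfigFromFlags reads host and TLS cert/key/CA paths out of the
	// kubeconfig, yielding a *rest.Config like the one logged above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host, "client ready:", clientset != nil)
}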
	I1025 16:14:57.017066   13110 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 16:14:57.019857   13110 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-782000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
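The drift check above is just `diff -u old new` over the two kubeadm configs, relying on diff's documented exit codes: 0 means identical, 1 means they differ (reconfigure), anything higher is an error. A sketch of that decision in Go, with the paths taken from the log; the helper name configDrifted is an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted reports whether two kubeadm configs differ, mapping diff's
// exit codes: 0 = identical, 1 = different, >1 = trouble.
func configDrifted(oldPath, newPath string) (bool, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		fmt.Print(string(out)) // the unified diff, as captured above
		return true, nil
	}
	return false, err
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	fmt.Println("drifted:", drifted)
}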
	I1025 16:14:57.019861   13110 kubeadm.go:1160] stopping kube-system containers ...
	I1025 16:14:57.019908   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 16:14:57.030686   13110 docker.go:483] Stopping containers: [7f47591e8309 8ef6282e225f 0f8b5253d658 50a050a9e75c 85a87d3c29bf fcc2487cc3e0 56ee9cb5f7b9 cc624b1f4264]
	I1025 16:14:57.030754   13110 ssh_runner.go:195] Run: docker stop 7f47591e8309 8ef6282e225f 0f8b5253d658 50a050a9e75c 85a87d3c29bf fcc2487cc3e0 56ee9cb5f7b9 cc624b1f4264
	I1025 16:14:57.046152   13110 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 16:14:57.051638   13110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:14:57.054838   13110 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 16:14:57.054844   13110 kubeadm.go:157] found existing configuration files:
	
	I1025 16:14:57.054876   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf
	I1025 16:14:57.057535   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 16:14:57.057566   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:14:57.060348   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf
	I1025 16:14:57.063384   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 16:14:57.063408   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:14:57.066360   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf
	I1025 16:14:57.068868   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 16:14:57.068891   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:14:57.071802   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf
	I1025 16:14:57.074846   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 16:14:57.074876   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 16:14:57.077298   13110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:14:57.080269   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.103056   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.402795   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.535117   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.565675   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 16:14:57.589237   13110 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:14:57.589323   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:14:58.091393   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:14:58.591410   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:14:58.602892   13110 api_server.go:72] duration metric: took 1.013660666s to wait for apiserver process to appear ...
	I1025 16:14:58.602911   13110 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:14:58.602929   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:03.605044   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:03.605115   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:08.605554   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:08.605614   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:13.606144   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:13.606209   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:18.607040   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:18.607091   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:23.608022   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:23.608048   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:28.609000   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:28.609021   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:33.610327   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:33.610370   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:38.610730   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:38.610750   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:43.612442   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:43.612466   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:48.612709   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:48.612752   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:53.615029   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:15:53.615072   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:15:58.617362   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
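The repeating pair of lines above is the healthz wait loop: issue a GET to /healthz with a short per-request timeout and retry until the apiserver answers 200 OK or the overall deadline lapses. Here every request dies with a client timeout after roughly five seconds, so the loop eventually falls through to the log-collection pass below. A minimal sketch of such a loop, under the assumption that the test apiserver's self-signed certificate is not in the trust store (hence the skip-verify transport):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, matching the 5s gaps above
		Transport: &http.Transport{
			// Assumption: self-signed test cluster, so skip chain verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}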
	I1025 16:15:58.617555   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:15:58.638539   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:15:58.638645   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:15:58.653142   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:15:58.653234   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:15:58.666280   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:15:58.666362   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:15:58.676894   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:15:58.676975   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:15:58.687817   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:15:58.687906   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:15:58.698101   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:15:58.698183   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:15:58.708846   13110 logs.go:282] 0 containers: []
	W1025 16:15:58.708856   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:15:58.708920   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:15:58.719448   13110 logs.go:282] 0 containers: []
	W1025 16:15:58.719460   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:15:58.719470   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:15:58.719475   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:15:58.756977   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:15:58.756996   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:15:58.869128   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:15:58.869139   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:15:58.896208   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:15:58.896218   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:15:58.908361   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:15:58.908372   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:15:58.920365   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:15:58.920377   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:15:58.935148   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:15:58.935162   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:15:58.952550   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:15:58.952560   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:15:58.957227   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:15:58.957234   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:15:58.971521   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:15:58.971532   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:15:58.988759   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:15:58.988769   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:15:59.014813   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:15:59.014822   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:15:59.029215   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:15:59.029226   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:15:59.044472   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:15:59.044483   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:15:59.055649   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:15:59.055659   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:01.569188   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:06.571414   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:06.571588   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:06.582695   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:06.582789   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:06.593267   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:06.593350   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:06.603884   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:06.603961   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:06.615625   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:06.615711   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:06.631307   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:06.631393   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:06.641985   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:06.642065   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:06.652426   13110 logs.go:282] 0 containers: []
	W1025 16:16:06.652439   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:06.652505   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:06.662354   13110 logs.go:282] 0 containers: []
	W1025 16:16:06.662367   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:06.662373   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:06.662378   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:06.676345   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:06.676355   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:06.690870   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:06.690881   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:06.702913   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:06.702923   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:06.740411   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:06.740422   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:06.769963   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:06.769973   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:06.781552   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:06.781563   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:06.792923   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:06.792936   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:06.810006   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:06.810020   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:06.824605   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:06.824616   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:06.838339   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:06.838349   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:06.877194   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:06.877203   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:06.881571   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:06.881579   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:06.895938   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:06.895947   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:06.909777   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:06.909787   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:09.436824   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:14.439234   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:14.439531   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:14.465473   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:14.465631   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:14.482353   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:14.482453   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:14.495897   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:14.495980   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:14.507541   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:14.507624   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:14.517669   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:14.517742   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:14.528848   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:14.528926   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:14.539922   13110 logs.go:282] 0 containers: []
	W1025 16:16:14.539934   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:14.540003   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:14.550099   13110 logs.go:282] 0 containers: []
	W1025 16:16:14.550112   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:14.550121   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:14.550127   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:14.554202   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:14.554210   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:14.588645   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:14.588658   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:14.604082   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:14.604093   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:14.641266   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:14.641275   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:14.659504   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:14.659518   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:14.673494   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:14.673503   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:14.699777   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:14.699784   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:14.711741   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:14.711751   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:14.737569   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:14.737578   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:14.749519   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:14.749530   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:14.767356   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:14.767367   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:14.778478   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:14.778493   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:14.791760   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:14.791771   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:14.803595   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:14.803606   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:17.323109   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:22.325417   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:22.325603   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:22.340769   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:22.340869   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:22.352980   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:22.353059   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:22.363719   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:22.363801   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:22.374508   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:22.374592   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:22.385003   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:22.385083   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:22.395675   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:22.395750   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:22.406154   13110 logs.go:282] 0 containers: []
	W1025 16:16:22.406168   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:22.406231   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:22.416510   13110 logs.go:282] 0 containers: []
	W1025 16:16:22.416523   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:22.416531   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:22.416536   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:22.435896   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:22.435908   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:22.454631   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:22.454643   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:22.466416   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:22.466427   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:22.470702   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:22.470710   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:22.505728   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:22.505741   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:22.517172   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:22.517187   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:22.540863   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:22.540870   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:22.565642   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:22.565653   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:22.589148   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:22.589158   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:22.603535   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:22.603546   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:22.617741   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:22.617750   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:22.629441   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:22.629450   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:22.645938   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:22.645948   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:22.683486   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:22.683494   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:25.196922   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:30.199245   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:30.199380   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:30.210654   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:30.210739   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:30.221092   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:30.221173   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:30.231425   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:30.231507   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:30.242016   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:30.242090   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:30.252653   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:30.252729   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:30.267120   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:30.267194   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:30.277559   13110 logs.go:282] 0 containers: []
	W1025 16:16:30.277572   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:30.277645   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:30.290009   13110 logs.go:282] 0 containers: []
	W1025 16:16:30.290023   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:30.290030   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:30.290037   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:30.328022   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:30.328035   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:30.332123   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:30.332129   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:30.345823   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:30.345834   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:30.360516   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:30.360525   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:30.372202   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:30.372213   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:30.390033   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:30.390043   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:30.403769   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:30.403783   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:30.429181   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:30.429194   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:30.442323   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:30.442338   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:30.456002   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:30.456016   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:30.483659   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:30.483675   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:30.502722   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:30.502734   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:30.539683   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:30.539694   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:30.555140   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:30.555149   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:33.069822   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:38.072069   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:38.072248   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:38.083510   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:38.083594   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:38.094331   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:38.094414   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:38.105513   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:38.105593   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:38.115681   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:38.115770   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:38.126200   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:38.126281   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:38.136846   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:38.136925   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:38.156076   13110 logs.go:282] 0 containers: []
	W1025 16:16:38.156091   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:38.156165   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:38.166170   13110 logs.go:282] 0 containers: []
	W1025 16:16:38.166181   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:38.166190   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:38.166197   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:38.171024   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:38.171030   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:38.185762   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:38.185771   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:38.197553   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:38.197568   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:38.209564   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:38.209575   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:38.223924   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:38.223936   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:38.255147   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:38.255161   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:38.268130   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:38.268142   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:38.286353   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:38.286364   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:38.327823   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:38.327833   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:38.346632   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:38.346644   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:38.373256   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:38.373268   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:38.412374   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:38.412392   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:38.427110   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:38.427123   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:38.439637   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:38.439653   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:40.955284   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:45.957824   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:45.958336   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:45.996486   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:45.996640   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:46.015377   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:46.015479   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:46.033153   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:46.033244   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:46.056844   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:46.056955   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:46.068365   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:46.068438   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:46.078833   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:46.078903   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:46.089084   13110 logs.go:282] 0 containers: []
	W1025 16:16:46.089099   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:46.089157   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:46.100825   13110 logs.go:282] 0 containers: []
	W1025 16:16:46.100836   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:46.100845   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:46.100851   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:46.127524   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:46.127536   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:46.146789   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:46.146803   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:46.173313   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:46.173328   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:46.188076   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:46.188085   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:46.200025   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:46.200039   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:46.212698   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:46.212713   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:46.225382   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:46.225395   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:46.264604   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:46.264615   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:46.303937   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:46.303950   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:46.323681   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:46.323693   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:46.336212   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:46.336225   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:46.341004   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:46.341016   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:46.355772   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:46.355786   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:46.371221   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:46.371234   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:48.890062   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:16:53.891837   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:16:53.892298   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:16:53.926306   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:16:53.926455   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:16:53.944936   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:16:53.945038   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:16:53.960171   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:16:53.960262   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:16:53.973364   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:16:53.973442   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:16:53.987767   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:16:53.987841   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:16:53.999577   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:16:53.999657   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:16:54.011030   13110 logs.go:282] 0 containers: []
	W1025 16:16:54.011040   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:16:54.011106   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:16:54.022826   13110 logs.go:282] 0 containers: []
	W1025 16:16:54.022837   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:16:54.022846   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:16:54.022851   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:16:54.039151   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:16:54.039168   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:16:54.055182   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:16:54.055197   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:16:54.093708   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:16:54.093720   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:16:54.098646   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:16:54.098662   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:16:54.112073   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:16:54.112088   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:16:54.132449   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:16:54.132463   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:16:54.159315   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:16:54.159326   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:16:54.191791   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:16:54.191809   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:16:54.205132   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:16:54.205147   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:16:54.222905   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:16:54.222922   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:16:54.236365   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:16:54.236379   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:16:54.249533   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:16:54.249544   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:16:54.262456   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:16:54.262467   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:16:54.303476   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:16:54.303486   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:16:56.819907   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:01.820096   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:01.820237   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:01.838515   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:01.838610   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:01.851731   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:01.851817   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:01.862664   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:01.862753   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:01.874160   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:01.874239   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:01.885491   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:01.885570   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:01.896832   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:01.896912   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:01.907564   13110 logs.go:282] 0 containers: []
	W1025 16:17:01.907575   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:01.907645   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:01.922600   13110 logs.go:282] 0 containers: []
	W1025 16:17:01.922611   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:01.922619   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:01.922625   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:01.937970   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:01.937982   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:01.953179   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:01.953191   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:01.965713   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:01.965725   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:01.978131   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:01.978142   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:01.982604   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:01.982613   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:02.023769   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:02.023782   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:02.038728   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:02.038741   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:02.065273   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:02.065286   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:02.078285   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:02.078297   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:02.097236   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:02.097251   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:02.110361   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:02.110373   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:02.147717   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:02.147731   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:02.174935   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:02.174946   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:02.194133   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:02.194143   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:04.721253   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:09.722038   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:09.722156   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:09.733792   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:09.733880   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:09.745539   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:09.745627   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:09.757118   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:09.757198   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:09.768223   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:09.768306   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:09.779308   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:09.779388   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:09.792086   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:09.792163   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:09.803754   13110 logs.go:282] 0 containers: []
	W1025 16:17:09.803804   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:09.803880   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:09.814584   13110 logs.go:282] 0 containers: []
	W1025 16:17:09.814594   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:09.814603   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:09.814608   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:09.855648   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:09.855668   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:09.871438   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:09.871454   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:09.890141   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:09.890157   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:09.917406   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:09.917426   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:09.933057   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:09.933067   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:09.962022   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:09.962031   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:09.966607   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:09.966620   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:10.004357   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:10.004369   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:10.023372   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:10.023383   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:10.037916   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:10.037926   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:10.050099   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:10.050110   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:10.062289   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:10.062299   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:10.074314   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:10.074325   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:10.092437   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:10.092447   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:12.604812   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:17.605266   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:17.605366   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:17.617031   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:17.617114   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:17.629246   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:17.629341   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:17.640205   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:17.640280   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:17.652191   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:17.652274   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:17.665042   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:17.665125   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:17.676583   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:17.676661   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:17.687744   13110 logs.go:282] 0 containers: []
	W1025 16:17:17.687759   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:17.687830   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:17.701396   13110 logs.go:282] 0 containers: []
	W1025 16:17:17.701409   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:17.701419   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:17.701425   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:17.743670   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:17.743690   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:17.748262   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:17.748269   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:17.760002   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:17.760016   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:17.785745   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:17.785768   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:17.823858   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:17.823871   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:17.836347   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:17.836359   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:17.859705   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:17.859715   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:17.872810   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:17.872820   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:17.887186   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:17.887197   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:17.912960   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:17.912971   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:17.926828   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:17.926838   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:17.941254   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:17.941265   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:17.953599   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:17.953611   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:17.968608   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:17.968619   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:20.482441   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:25.484575   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:25.484724   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:25.496834   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:25.496932   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:25.507962   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:25.508032   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:25.519674   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:25.519752   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:25.531331   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:25.531407   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:25.543047   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:25.543123   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:25.554661   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:25.554735   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:25.565827   13110 logs.go:282] 0 containers: []
	W1025 16:17:25.565839   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:25.565907   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:25.579571   13110 logs.go:282] 0 containers: []
	W1025 16:17:25.579581   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:25.579588   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:25.579593   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:25.593786   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:25.593800   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:25.606616   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:25.606628   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:25.631576   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:25.631585   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:25.668298   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:25.668316   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:25.694850   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:25.694861   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:25.706306   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:25.706318   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:25.720992   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:25.721003   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:25.738849   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:25.738860   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:25.775347   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:25.775358   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:25.796402   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:25.796412   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:25.808440   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:25.808451   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:25.820618   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:25.820628   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:25.832503   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:25.832528   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:25.837016   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:25.837022   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:28.352762   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:33.355067   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:33.355164   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:33.367827   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:33.367909   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:33.379368   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:33.379456   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:33.391188   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:33.391275   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:33.402743   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:33.402831   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:33.413444   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:33.413532   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:33.424908   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:33.424997   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:33.436007   13110 logs.go:282] 0 containers: []
	W1025 16:17:33.436020   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:33.436092   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:33.447095   13110 logs.go:282] 0 containers: []
	W1025 16:17:33.447106   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:33.447115   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:33.447120   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:33.474755   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:33.474768   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:33.488574   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:33.488587   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:33.514933   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:33.514943   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:33.555929   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:33.555947   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:33.593701   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:33.593712   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:33.608736   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:33.608746   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:33.622721   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:33.622732   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:33.636934   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:33.636945   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:33.648100   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:33.648112   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:33.652784   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:33.652793   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:33.664332   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:33.664343   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:33.681089   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:33.681098   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:33.693913   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:33.693925   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:33.706028   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:33.706038   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:36.222423   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:41.223787   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:41.223880   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:41.235111   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:41.235194   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:41.245791   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:41.245874   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:41.257863   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:41.258069   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:41.269571   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:41.269666   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:41.283289   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:41.283360   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:41.295497   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:41.295573   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:41.306138   13110 logs.go:282] 0 containers: []
	W1025 16:17:41.306151   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:41.306259   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:41.317638   13110 logs.go:282] 0 containers: []
	W1025 16:17:41.317650   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:41.317657   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:41.317662   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:41.344098   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:41.344107   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:41.366064   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:41.366076   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:41.395601   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:41.395613   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:41.410500   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:41.410512   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:41.414957   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:41.414969   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:41.430024   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:41.430036   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:41.446391   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:41.446401   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:41.461596   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:41.461607   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:41.474591   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:41.474602   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:41.487247   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:41.487256   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:41.506711   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:41.506727   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:41.545819   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:41.545828   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:41.581160   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:41.581170   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:41.592851   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:41.592861   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:44.120328   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:49.121894   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:49.121993   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:49.133638   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:49.133724   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:49.145669   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:49.145749   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:49.158647   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:49.158732   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:49.178963   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:49.179051   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:49.189773   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:49.189855   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:49.203052   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:49.203155   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:49.213713   13110 logs.go:282] 0 containers: []
	W1025 16:17:49.213726   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:49.213802   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:49.224714   13110 logs.go:282] 0 containers: []
	W1025 16:17:49.224724   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:49.224731   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:49.224736   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:49.242967   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:49.242976   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:49.280237   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:49.280250   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:49.299372   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:49.299386   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:49.325812   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:49.325823   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:49.338658   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:49.338670   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:49.353829   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:49.353846   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:49.373390   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:49.373400   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:49.385780   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:49.385794   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:49.410300   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:49.410317   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:49.425678   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:49.425691   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:49.438208   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:49.438217   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:49.450425   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:49.450439   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:49.488233   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:49.488244   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:49.492979   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:49.492985   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:52.010396   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:17:57.012701   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:17:57.012813   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:17:57.031014   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:17:57.031093   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:17:57.042709   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:17:57.042793   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:17:57.054029   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:17:57.054128   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:17:57.066147   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:17:57.066233   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:17:57.079230   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:17:57.079315   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:17:57.091165   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:17:57.091254   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:17:57.102924   13110 logs.go:282] 0 containers: []
	W1025 16:17:57.102935   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:17:57.103007   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:17:57.115473   13110 logs.go:282] 0 containers: []
	W1025 16:17:57.115488   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:17:57.115499   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:17:57.115505   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:17:57.131129   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:17:57.131138   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:17:57.149498   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:17:57.149510   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:17:57.167879   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:17:57.167894   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:17:57.186394   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:17:57.186405   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:17:57.211593   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:17:57.211611   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:17:57.216698   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:17:57.216716   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:17:57.243028   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:17:57.243041   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:17:57.257977   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:17:57.257992   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:17:57.273682   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:17:57.273694   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:17:57.288331   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:17:57.288342   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:17:57.327184   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:17:57.327199   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:17:57.342055   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:17:57.342069   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:17:57.354382   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:17:57.354394   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:17:57.369146   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:17:57.369156   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:17:59.911772   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:04.914238   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:04.914341   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:04.925880   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:04.925968   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:04.937253   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:04.937335   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:04.950781   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:04.950864   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:04.962903   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:04.962990   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:04.973876   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:04.973955   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:04.985160   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:04.985243   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:05.001158   13110 logs.go:282] 0 containers: []
	W1025 16:18:05.001171   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:05.001242   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:05.012190   13110 logs.go:282] 0 containers: []
	W1025 16:18:05.012202   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:05.012209   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:05.012214   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:05.027685   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:05.027696   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:05.046227   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:05.046241   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:05.058040   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:05.058050   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:05.075852   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:05.075863   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:05.094920   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:05.094932   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:05.120533   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:05.120557   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:05.161704   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:05.161723   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:05.177518   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:05.177534   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:05.182059   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:05.182069   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:05.220717   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:05.220731   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:05.233376   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:05.233390   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:05.257510   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:05.257522   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:05.269392   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:05.269403   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:05.283083   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:05.283095   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
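Each gathering pass above follows the same two-step pattern: list candidate containers for a component by name filter, then tail each container's logs. A condensed sketch of that pattern (component names taken from the log itself; the k8s_ name prefix is how kubeadm-managed containers are named under the Docker runtime):

    # For each control-plane component, find its containers and tail their logs.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      for id in $(docker ps -a --filter=name=k8s_$c --format '{{.ID}}'); do
        echo "=== $c [$id] ==="
        docker logs --tail 400 "$id"
      done
    done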
	I1025 16:18:07.797003   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:12.798848   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:12.798931   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:12.810237   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:12.810315   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:12.821706   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:12.821790   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:12.833395   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:12.833478   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:12.843984   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:12.844063   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:12.854773   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:12.854852   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:12.866572   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:12.866650   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:12.878755   13110 logs.go:282] 0 containers: []
	W1025 16:18:12.878766   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:12.878840   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:12.890444   13110 logs.go:282] 0 containers: []
	W1025 16:18:12.890455   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:12.890463   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:12.890469   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:12.931197   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:12.931210   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:12.969021   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:12.969033   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:12.981455   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:12.981468   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:12.994334   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:12.994349   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:13.018246   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:13.018258   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:13.022503   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:13.022511   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:13.034144   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:13.034160   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:13.049997   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:13.050015   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:13.062824   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:13.062839   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:13.078333   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:13.078351   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:13.108981   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:13.108991   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:13.123065   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:13.123077   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:13.137970   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:13.137980   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:13.149453   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:13.149467   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:15.672376   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:20.674456   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:20.674516   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:20.686034   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:20.686073   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:20.697393   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:20.697479   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:20.708761   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:20.708844   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:20.720508   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:20.720596   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:20.731734   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:20.731815   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:20.743206   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:20.743289   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:20.755226   13110 logs.go:282] 0 containers: []
	W1025 16:18:20.755239   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:20.755313   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:20.770291   13110 logs.go:282] 0 containers: []
	W1025 16:18:20.770302   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:20.770310   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:20.770314   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:20.785574   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:20.785589   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:20.801503   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:20.801519   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:20.806198   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:20.806210   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:20.844049   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:20.844060   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:20.860370   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:20.860384   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:20.873068   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:20.873077   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:20.886722   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:20.886736   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:20.899409   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:20.899422   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:20.939181   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:20.939194   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:20.954445   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:20.954459   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:20.970529   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:20.970540   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:20.987467   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:20.987477   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:21.010824   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:21.010833   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:21.036460   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:21.036471   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:23.552186   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:28.554367   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:28.554467   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:28.566256   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:28.566342   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:28.580276   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:28.580358   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:28.592306   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:28.592389   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:28.603418   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:28.603501   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:28.615558   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:28.615641   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:28.627555   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:28.627631   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:28.638270   13110 logs.go:282] 0 containers: []
	W1025 16:18:28.638281   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:28.638353   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:28.649277   13110 logs.go:282] 0 containers: []
	W1025 16:18:28.649297   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:28.649363   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:28.649376   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:28.671731   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:28.671741   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:28.692548   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:28.692559   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:28.708268   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:28.708279   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:28.723440   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:28.723455   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:28.734903   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:28.734916   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:28.747750   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:28.747762   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:28.771742   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:28.771763   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:28.811644   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:28.811661   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:28.839308   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:28.839322   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:28.854960   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:28.854975   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:28.873768   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:28.873782   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:28.891443   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:28.891457   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:28.895596   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:28.895602   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:28.933931   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:28.933948   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:31.450182   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:36.452400   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:36.452482   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:36.464029   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:36.464109   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:36.475237   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:36.475315   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:36.493845   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:36.493925   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:36.505240   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:36.505324   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:36.518343   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:36.518421   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:36.530025   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:36.530107   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:36.541259   13110 logs.go:282] 0 containers: []
	W1025 16:18:36.541272   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:36.541347   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:36.552015   13110 logs.go:282] 0 containers: []
	W1025 16:18:36.552027   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:36.552034   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:36.552039   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:36.567493   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:36.567503   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:36.582329   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:36.582339   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:36.599534   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:36.599547   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:36.624358   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:36.624375   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:36.651332   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:36.651346   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:36.668258   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:36.668275   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:36.682639   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:36.682651   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:36.695562   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:36.695574   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:36.709157   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:36.709168   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:36.748233   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:36.748245   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:36.752431   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:36.752439   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:36.763797   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:36.763807   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:36.779354   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:36.779363   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:36.812660   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:36.812670   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:39.338162   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:44.340312   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:44.340426   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:44.352115   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:44.352199   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:44.363781   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:44.363868   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:44.375046   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:44.375129   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:44.386426   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:44.386518   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:44.399523   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:44.399606   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:44.411181   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:44.411267   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:44.422558   13110 logs.go:282] 0 containers: []
	W1025 16:18:44.422570   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:44.422637   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:44.434251   13110 logs.go:282] 0 containers: []
	W1025 16:18:44.434264   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:44.434275   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:44.434283   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:44.477280   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:44.477290   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:44.492742   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:44.492751   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:44.508041   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:44.508056   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:44.520602   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:44.520613   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:44.546963   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:44.546981   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:44.563471   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:44.563483   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:44.583432   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:44.583449   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:44.611645   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:44.611656   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:44.635130   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:44.635141   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:44.647564   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:44.647576   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:44.663946   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:44.663956   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:44.668352   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:44.668359   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:44.701908   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:44.701923   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:44.716051   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:44.716061   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:47.229922   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:18:52.232170   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:18:52.232240   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:18:52.243894   13110 logs.go:282] 2 containers: [97cb9ebf11df 0f8b5253d658]
	I1025 16:18:52.243976   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:18:52.255358   13110 logs.go:282] 2 containers: [749860c265a7 8ef6282e225f]
	I1025 16:18:52.255442   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:18:52.266083   13110 logs.go:282] 1 containers: [e8629f14c08e]
	I1025 16:18:52.266168   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:18:52.277759   13110 logs.go:282] 2 containers: [c76dee747021 7f47591e8309]
	I1025 16:18:52.277855   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:18:52.291282   13110 logs.go:282] 1 containers: [71231fae8497]
	I1025 16:18:52.291371   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:18:52.305188   13110 logs.go:282] 2 containers: [f7a9851991c9 50a050a9e75c]
	I1025 16:18:52.305293   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:18:52.316924   13110 logs.go:282] 0 containers: []
	W1025 16:18:52.316937   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:18:52.317013   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:18:52.329037   13110 logs.go:282] 0 containers: []
	W1025 16:18:52.329048   13110 logs.go:284] No container was found matching "storage-provisioner"
	I1025 16:18:52.329056   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:18:52.329061   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:18:52.368061   13110 logs.go:123] Gathering logs for etcd [8ef6282e225f] ...
	I1025 16:18:52.368076   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ef6282e225f"
	I1025 16:18:52.383366   13110 logs.go:123] Gathering logs for kube-scheduler [c76dee747021] ...
	I1025 16:18:52.383383   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c76dee747021"
	I1025 16:18:52.395666   13110 logs.go:123] Gathering logs for kube-scheduler [7f47591e8309] ...
	I1025 16:18:52.395679   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f47591e8309"
	I1025 16:18:52.411885   13110 logs.go:123] Gathering logs for kube-controller-manager [50a050a9e75c] ...
	I1025 16:18:52.411896   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50a050a9e75c"
	I1025 16:18:52.424907   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:18:52.424915   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:18:52.429208   13110 logs.go:123] Gathering logs for kube-apiserver [97cb9ebf11df] ...
	I1025 16:18:52.429224   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97cb9ebf11df"
	I1025 16:18:52.443877   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:18:52.443894   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:18:52.456404   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:18:52.456417   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:18:52.493235   13110 logs.go:123] Gathering logs for kube-apiserver [0f8b5253d658] ...
	I1025 16:18:52.493248   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f8b5253d658"
	I1025 16:18:52.519203   13110 logs.go:123] Gathering logs for etcd [749860c265a7] ...
	I1025 16:18:52.519215   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 749860c265a7"
	I1025 16:18:52.532922   13110 logs.go:123] Gathering logs for coredns [e8629f14c08e] ...
	I1025 16:18:52.532931   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8629f14c08e"
	I1025 16:18:52.544681   13110 logs.go:123] Gathering logs for kube-proxy [71231fae8497] ...
	I1025 16:18:52.544693   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71231fae8497"
	I1025 16:18:52.556149   13110 logs.go:123] Gathering logs for kube-controller-manager [f7a9851991c9] ...
	I1025 16:18:52.556159   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7a9851991c9"
	I1025 16:18:52.573202   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:18:52.573212   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:18:55.099548   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:00.102113   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:00.102153   13110 kubeadm.go:597] duration metric: took 4m3.09195725s to restartPrimaryControlPlane
	W1025 16:19:00.102184   13110 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 16:19:00.102197   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 16:19:01.119750   13110 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.017548167s)
	I1025 16:19:01.119823   13110 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 16:19:01.125053   13110 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 16:19:01.128015   13110 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 16:19:01.130809   13110 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 16:19:01.130815   13110 kubeadm.go:157] found existing configuration files:
	
	I1025 16:19:01.130849   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf
	I1025 16:19:01.133485   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 16:19:01.133513   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 16:19:01.136308   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf
	I1025 16:19:01.138828   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 16:19:01.138863   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 16:19:01.141879   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf
	I1025 16:19:01.145231   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 16:19:01.145259   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 16:19:01.148191   13110 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf
	I1025 16:19:01.150740   13110 kubeadm.go:163] "https://control-plane.minikube.internal:62397" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62397 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 16:19:01.150766   13110 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
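The four grep/rm pairs above are minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it checks whether the file references the expected control-plane endpoint and removes the file otherwise (here every file is simply absent, so each grep exits with status 2 and the rm is a no-op). The equivalent loop, sketched with the endpoint from this run:

    # Remove any kubeconfig that does not reference the expected endpoint.
    ep="https://control-plane.minikube.internal:62397"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ep" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done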
	I1025 16:19:01.153836   13110 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 16:19:01.172314   13110 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 16:19:01.172380   13110 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 16:19:01.221600   13110 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 16:19:01.221658   13110 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 16:19:01.221710   13110 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 16:19:01.277969   13110 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 16:19:01.282202   13110 out.go:235]   - Generating certificates and keys ...
	I1025 16:19:01.282239   13110 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 16:19:01.282290   13110 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 16:19:01.282333   13110 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 16:19:01.282365   13110 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 16:19:01.282424   13110 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 16:19:01.282454   13110 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 16:19:01.282488   13110 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 16:19:01.282527   13110 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 16:19:01.282572   13110 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 16:19:01.282647   13110 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 16:19:01.282686   13110 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 16:19:01.282721   13110 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 16:19:01.426410   13110 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 16:19:01.545830   13110 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 16:19:01.638698   13110 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 16:19:01.758627   13110 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 16:19:01.787215   13110 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 16:19:01.787615   13110 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 16:19:01.787635   13110 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 16:19:01.870489   13110 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 16:19:01.874691   13110 out.go:235]   - Booting up control plane ...
	I1025 16:19:01.874742   13110 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 16:19:01.874774   13110 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 16:19:01.874818   13110 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 16:19:01.874862   13110 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 16:19:01.874970   13110 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 16:19:06.878654   13110 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.008494 seconds
	I1025 16:19:06.878715   13110 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 16:19:06.882031   13110 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 16:19:07.392646   13110 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 16:19:07.392810   13110 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-782000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 16:19:07.896838   13110 kubeadm.go:310] [bootstrap-token] Using token: kbudsc.ttqy1u5ja78iqr90
	I1025 16:19:07.903390   13110 out.go:235]   - Configuring RBAC rules ...
	I1025 16:19:07.903459   13110 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 16:19:07.903513   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 16:19:07.905339   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 16:19:07.910228   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 16:19:07.910873   13110 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 16:19:07.911801   13110 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 16:19:07.914928   13110 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 16:19:08.061769   13110 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 16:19:08.301911   13110 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 16:19:08.302516   13110 kubeadm.go:310] 
	I1025 16:19:08.302618   13110 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 16:19:08.302631   13110 kubeadm.go:310] 
	I1025 16:19:08.302669   13110 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 16:19:08.302676   13110 kubeadm.go:310] 
	I1025 16:19:08.302693   13110 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 16:19:08.302724   13110 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 16:19:08.302767   13110 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 16:19:08.302777   13110 kubeadm.go:310] 
	I1025 16:19:08.302807   13110 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 16:19:08.302817   13110 kubeadm.go:310] 
	I1025 16:19:08.302842   13110 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 16:19:08.302844   13110 kubeadm.go:310] 
	I1025 16:19:08.302873   13110 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 16:19:08.302914   13110 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 16:19:08.302948   13110 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 16:19:08.302951   13110 kubeadm.go:310] 
	I1025 16:19:08.302996   13110 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 16:19:08.303043   13110 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 16:19:08.303053   13110 kubeadm.go:310] 
	I1025 16:19:08.303098   13110 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kbudsc.ttqy1u5ja78iqr90 \
	I1025 16:19:08.303319   13110 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe \
	I1025 16:19:08.303337   13110 kubeadm.go:310] 	--control-plane 
	I1025 16:19:08.303343   13110 kubeadm.go:310] 
	I1025 16:19:08.303391   13110 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 16:19:08.303395   13110 kubeadm.go:310] 
	I1025 16:19:08.303466   13110 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kbudsc.ttqy1u5ja78iqr90 \
	I1025 16:19:08.303519   13110 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ffd2fddcca542d38aed4b14aa54bdac916e7b257b7596865a537c11b5cfb0fe 
	I1025 16:19:08.303612   13110 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 16:19:08.303625   13110 cni.go:84] Creating CNI manager for ""
	I1025 16:19:08.303633   13110 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:19:08.306303   13110 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 16:19:08.314220   13110 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 16:19:08.319083   13110 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
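The scp above writes a 496-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The file's actual contents are not captured in the log; a representative bridge conflist of the kind this step installs (field values are illustrative, not taken from this run) would be written like:

    # Hypothetical example only; the real 1-k8s.conflist from this run is not in the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF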
	I1025 16:19:08.330017   13110 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 16:19:08.330092   13110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 16:19:08.330132   13110 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-782000 minikube.k8s.io/updated_at=2024_10_25T16_19_08_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=db65f53f04c460e02d289e77cb94648c116e89dc minikube.k8s.io/name=stopped-upgrade-782000 minikube.k8s.io/primary=true
	I1025 16:19:08.370036   13110 kubeadm.go:1113] duration metric: took 40.009666ms to wait for elevateKubeSystemPrivileges
	I1025 16:19:08.370058   13110 ops.go:34] apiserver oom_adj: -16
	I1025 16:19:08.370166   13110 kubeadm.go:394] duration metric: took 4m11.373879375s to StartCluster
	I1025 16:19:08.370178   13110 settings.go:142] acquiring lock: {Name:mkc7ffce42494ff0056038ca2482eba326c60c40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:19:08.370277   13110 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:19:08.370676   13110 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/kubeconfig: {Name:mkab4c8ddad2dcb8cd5939090920ae3e3753785d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:19:08.370864   13110 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:19:08.370903   13110 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 16:19:08.370976   13110 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:19:08.370984   13110 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-782000"
	I1025 16:19:08.370991   13110 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-782000"
	W1025 16:19:08.370994   13110 addons.go:243] addon storage-provisioner should already be in state true
	I1025 16:19:08.371005   13110 host.go:66] Checking if "stopped-upgrade-782000" exists ...
	I1025 16:19:08.370991   13110 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-782000"
	I1025 16:19:08.371028   13110 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-782000"
	I1025 16:19:08.371466   13110 retry.go:31] will retry after 1.056223176s: connect: dial unix /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/monitor: connect: connection refused
	I1025 16:19:08.372246   13110 kapi.go:59] client config for stopped-upgrade-782000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/stopped-upgrade-782000/client.key", CAFile:"/Users/jenkins/minikube-integration/19758-10490/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106a82510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 16:19:08.372372   13110 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-782000"
	W1025 16:19:08.372376   13110 addons.go:243] addon default-storageclass should already be in state true
	I1025 16:19:08.372382   13110 host.go:66] Checking if "stopped-upgrade-782000" exists ...
	I1025 16:19:08.372911   13110 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 16:19:08.372916   13110 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 16:19:08.372921   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:19:08.375285   13110 out.go:177] * Verifying Kubernetes components...
	I1025 16:19:08.385231   13110 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 16:19:08.472668   13110 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 16:19:08.478400   13110 api_server.go:52] waiting for apiserver process to appear ...
	I1025 16:19:08.478451   13110 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 16:19:08.482877   13110 api_server.go:72] duration metric: took 112.00025ms to wait for apiserver process to appear ...
	I1025 16:19:08.482887   13110 api_server.go:88] waiting for apiserver healthz status ...
	I1025 16:19:08.482893   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:08.540204   13110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 16:19:08.870134   13110 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 16:19:08.870146   13110 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 16:19:09.432125   13110 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 16:19:09.436147   13110 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:19:09.436154   13110 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 16:19:09.436161   13110 sshutil.go:53] new ssh client: &{IP:localhost Port:62363 SSHKeyPath:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/stopped-upgrade-782000/id_rsa Username:docker}
	I1025 16:19:09.475531   13110 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 16:19:13.484905   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:13.484930   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:18.485150   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:18.485203   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:23.485508   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:23.485544   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:28.485932   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:28.485962   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:33.486439   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:33.486462   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:38.487078   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:38.487116   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 16:19:38.870903   13110 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 16:19:38.876252   13110 out.go:177] * Enabled addons: storage-provisioner
	I1025 16:19:38.887061   13110 addons.go:510] duration metric: took 30.516376583s for enable addons: enabled=[storage-provisioner]
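The partial result above follows from what each addon needs: storage-provisioner only requires a `kubectl apply` of a manifest over SSH inside the VM, while default-storageclass must list StorageClasses through the apiserver at 10.0.2.15:8443, which never answers. A hedged client-go sketch of what a make-default callback has to do; the List call is the step that fails above with the i/o timeout. This is illustrative, not minikube's addons code:

// Illustrative sketch only: mark one StorageClass as the cluster default.
// Requires a reachable apiserver, which is exactly what is missing here.
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const defaultAnnotation = "storageclass.kubernetes.io/is-default-class"

func makeDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	// This List is the call that failed above with
	// "dial tcp 10.0.2.15:8443: i/o timeout".
	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("listing StorageClasses: %w", err)
	}
	for _, sc := range scs.Items {
		val := "false"
		if sc.Name == name {
			val = "true"
		}
		patch, _ := json.Marshal(map[string]any{
			"metadata": map[string]any{
				"annotations": map[string]string{defaultAnnotation: val},
			},
		})
		if _, err := cs.StorageV1().StorageClasses().Patch(ctx, sc.Name,
			types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := makeDefault(context.Background(), cs, "standard"); err != nil {
		fmt.Println("!", err)
	}
}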
	I1025 16:19:43.487958   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:43.488017   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:48.489373   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:48.489417   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:53.490834   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:53.490884   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:19:58.492649   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:19:58.492690   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:03.494849   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:03.494875   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:08.495213   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:08.495421   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:08.512604   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:08.512695   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:08.523414   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:08.523496   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:08.534009   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:08.534089   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:08.544779   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:08.544863   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:08.556304   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:08.556376   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:08.566523   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:08.566598   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:08.583182   13110 logs.go:282] 0 containers: []
	W1025 16:20:08.583193   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:08.583267   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:08.593478   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:08.593494   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:08.593500   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:08.632248   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:08.632258   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:08.636653   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:08.636661   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:08.651603   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:08.651613   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:08.664654   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:08.664665   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:08.682785   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:08.682795   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:08.694147   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:08.694159   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:08.734407   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:08.734419   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:08.761529   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:08.761542   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:08.773208   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:08.773219   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:08.785097   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:08.785108   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:08.799798   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:08.799809   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:08.824338   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:08.824347   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
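Each failed health window triggers the same diagnostic sweep seen above: enumerate control-plane containers by a k8s_<name> filter, then tail the last 400 lines of each. A compact Go sketch of that sweep, run locally via os/exec for simplicity; minikube issues the same docker commands inside the VM over SSH, and the helper names here are assumptions of this sketch:

// Illustrative sketch only: the docker-based log sweep repeated in each
// cycle of this log, run against a local docker daemon.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
		}
	}
}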
	I1025 16:20:11.338283   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:16.341175   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:16.341574   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:16.373266   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:16.373411   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:16.392596   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:16.392696   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:16.407260   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:16.407342   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:16.419083   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:16.419162   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:16.430854   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:16.430933   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:16.441419   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:16.441490   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:16.451179   13110 logs.go:282] 0 containers: []
	W1025 16:20:16.451192   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:16.451256   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:16.461446   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:16.461461   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:16.461470   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:20:16.472639   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:16.472652   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:16.510943   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:16.510954   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:16.514981   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:16.514990   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:16.551011   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:16.551022   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:16.565271   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:16.565282   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:16.581226   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:16.581238   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:16.601835   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:16.601848   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:16.614259   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:16.614270   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:16.625474   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:16.625488   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:16.637503   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:16.637515   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:16.651882   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:16.651895   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:16.663166   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:16.663179   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:19.187579   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:24.188588   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:24.188664   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:24.199846   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:24.199910   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:24.210419   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:24.210488   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:24.220661   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:24.220730   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:24.231624   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:24.231683   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:24.242327   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:24.242394   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:24.255349   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:24.255426   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:24.267528   13110 logs.go:282] 0 containers: []
	W1025 16:20:24.267540   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:24.267601   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:24.277789   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:24.277805   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:24.277812   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:24.289299   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:24.289309   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:24.303893   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:24.303903   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:24.315878   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:24.315886   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:24.354755   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:24.354767   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:24.369221   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:24.369232   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:24.380405   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:24.380415   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:24.394495   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:24.394506   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:24.411601   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:24.411612   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:24.423211   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:24.423221   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:24.461206   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:24.461214   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:24.465702   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:24.465708   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:24.488931   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:24.488941   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:20:27.006894   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:32.009752   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:32.010348   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:32.049393   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:32.049548   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:32.072009   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:32.072138   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:32.087641   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:32.087739   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:32.099949   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:32.100026   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:32.111489   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:32.111569   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:32.123015   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:32.123095   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:32.133477   13110 logs.go:282] 0 containers: []
	W1025 16:20:32.133489   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:32.133555   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:32.144280   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:32.144297   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:32.144303   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:32.158514   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:32.158527   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:32.174643   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:32.174656   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:32.186401   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:32.186413   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:32.208775   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:32.208784   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:32.220586   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:32.220599   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:32.246394   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:32.246401   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:32.283787   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:32.283795   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:32.288028   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:32.288037   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:32.322055   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:32.322068   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:32.341819   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:32.341831   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:32.356628   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:32.356639   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:32.369279   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:32.369291   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:20:34.882457   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:39.884753   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:39.885221   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:39.924891   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:39.925047   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:39.948070   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:39.948208   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:39.965685   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:39.965775   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:39.977578   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:39.977653   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:39.988211   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:39.988288   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:39.999012   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:39.999091   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:40.009474   13110 logs.go:282] 0 containers: []
	W1025 16:20:40.009488   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:40.009554   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:40.020004   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:40.020020   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:40.020026   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:40.037074   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:40.037083   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:40.050901   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:40.050913   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:40.068826   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:40.068838   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:40.081752   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:40.081763   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:40.106224   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:40.106230   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:40.143677   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:40.143683   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:40.180671   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:40.180682   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:40.192698   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:40.192708   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:40.207021   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:40.207031   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:40.218557   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:40.218567   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:20:40.229960   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:40.229973   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:40.234166   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:40.234176   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:42.747805   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:47.750203   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:47.750533   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:47.780378   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:47.780519   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:47.800234   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:47.800332   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:47.813511   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:47.813594   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:47.825216   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:47.825287   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:47.836027   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:47.836109   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:47.846275   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:47.846341   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:47.856047   13110 logs.go:282] 0 containers: []
	W1025 16:20:47.856060   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:47.856124   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:47.866172   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:47.866186   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:47.866192   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:47.881019   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:47.881030   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:47.895146   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:47.895156   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:47.906436   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:47.906446   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:47.918088   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:47.918099   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:47.935419   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:47.935429   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:47.959403   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:47.959412   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:47.963670   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:47.963678   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:47.997692   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:47.997705   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:20:48.008863   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:48.008873   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:48.023713   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:48.023725   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:48.036193   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:48.036203   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:48.074686   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:48.074697   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:50.586323   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:20:55.588576   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:20:55.589148   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:20:55.628709   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:20:55.628892   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:20:55.650782   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:20:55.650902   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:20:55.666518   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:20:55.666607   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:20:55.680903   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:20:55.680986   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:20:55.694717   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:20:55.694792   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:20:55.705475   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:20:55.705554   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:20:55.715470   13110 logs.go:282] 0 containers: []
	W1025 16:20:55.715482   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:20:55.715537   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:20:55.726074   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:20:55.726094   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:20:55.726099   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:20:55.737700   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:20:55.737710   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:20:55.749059   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:20:55.749072   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:20:55.760525   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:20:55.760539   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:20:55.796893   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:20:55.796899   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:20:55.831485   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:20:55.831494   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:20:55.845456   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:20:55.845468   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:20:55.863873   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:20:55.863886   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:20:55.880735   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:20:55.880747   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:20:55.892762   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:20:55.892771   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:20:55.916442   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:20:55.916450   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:20:55.921061   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:20:55.921068   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:20:55.935539   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:20:55.935551   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:20:58.448946   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:03.451427   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:03.451948   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:03.490596   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:03.490754   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:03.511723   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:03.511854   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:03.526766   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:21:03.526851   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:03.539403   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:03.539489   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:03.550114   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:03.550191   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:03.564597   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:03.564675   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:03.574792   13110 logs.go:282] 0 containers: []
	W1025 16:21:03.574805   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:03.574871   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:03.584847   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:03.584863   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:03.584868   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:03.596557   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:03.596568   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:03.611275   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:03.611287   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:03.623542   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:03.623553   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:03.641150   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:03.641161   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:03.665787   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:03.665794   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:03.677535   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:03.677546   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:03.715963   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:03.715975   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:03.751654   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:03.751668   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:03.770236   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:03.770248   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:03.788429   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:03.788440   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:03.799699   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:03.799709   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:03.811449   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:03.811461   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:06.318101   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:11.320134   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:11.320464   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:11.351164   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:11.351277   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:11.367358   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:11.367443   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:11.380844   13110 logs.go:282] 2 containers: [e387746d72b2 4a027988dfff]
	I1025 16:21:11.380928   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:11.392620   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:11.392696   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:11.403661   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:11.403744   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:11.414381   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:11.414455   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:11.425156   13110 logs.go:282] 0 containers: []
	W1025 16:21:11.425167   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:11.425227   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:11.436310   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:11.436328   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:11.436333   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:11.448782   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:11.448793   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:11.461153   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:11.461164   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:11.476133   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:11.476145   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:11.488302   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:11.488317   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:11.525567   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:11.525576   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:11.529785   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:11.529796   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:11.544734   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:11.544746   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:11.566154   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:11.566166   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:11.590697   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:11.590706   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:11.626978   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:11.626992   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:11.649362   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:11.649372   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:11.661608   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:11.661617   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:14.175611   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:19.177798   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:19.177918   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:19.196332   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:19.196408   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:19.215999   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:19.216073   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:19.229263   13110 logs.go:282] 3 containers: [fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:21:19.229349   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:19.240266   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:19.240336   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:19.258498   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:19.258567   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:19.269846   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:19.269939   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:19.280472   13110 logs.go:282] 0 containers: []
	W1025 16:21:19.280486   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:19.280545   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:19.292011   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:19.292032   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:19.292038   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:19.296763   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:21:19.296773   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:21:19.308949   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:19.308961   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:19.321093   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:19.321105   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:19.334002   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:19.334012   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:19.371928   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:19.371935   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:19.386487   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:19.386498   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:19.400127   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:19.400141   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:19.425658   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:19.425668   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:19.460823   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:19.460836   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:19.475985   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:19.475996   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:19.488398   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:19.488408   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:19.500678   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:19.500690   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:19.516620   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:19.516629   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:22.037187   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:27.040067   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:27.040656   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:27.081057   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:27.081221   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:27.104444   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:27.104572   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:27.121034   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:21:27.121145   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:27.134697   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:27.134778   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:27.146040   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:27.146119   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:27.157791   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:27.157870   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:27.169287   13110 logs.go:282] 0 containers: []
	W1025 16:21:27.169299   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:27.169361   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:27.181188   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:27.181203   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:27.181208   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:27.217385   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:27.217399   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:27.232668   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:27.232679   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:27.247115   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:27.247127   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:27.259544   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:27.259558   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:27.296214   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:27.296223   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:27.300844   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:21:27.300852   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:21:27.313260   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:27.313271   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:27.332704   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:27.332717   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:27.357636   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:27.357645   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:27.369796   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:27.369805   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:27.385659   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:27.385669   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:27.397862   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:27.397874   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:27.412373   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:21:27.412383   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:21:27.424576   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:27.424588   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
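One detail does change across these otherwise identical sweeps: the coredns container list grows from two IDs to four (e387746d72b2 and 4a027988dfff, later joined by fa5fa557ec92 and then 2f0fbaae3e89), meaning coredns keeps being restarted while the apiserver stays unreachable. A tiny illustrative helper for spotting that churn between sweeps, with the IDs hard-coded from this log:

// Illustrative sketch only: diff two successive container-ID lists to
// surface restart churn between diagnostic sweeps.
package main

import "fmt"

func newIDs(prev, cur []string) []string {
	seen := map[string]bool{}
	for _, id := range prev {
		seen[id] = true
	}
	var fresh []string
	for _, id := range cur {
		if !seen[id] {
			fresh = append(fresh, id)
		}
	}
	return fresh
}

func main() {
	prev := []string{"e387746d72b2", "4a027988dfff"}
	cur := []string{"2f0fbaae3e89", "fa5fa557ec92", "e387746d72b2", "4a027988dfff"}
	fmt.Println("new coredns containers since last sweep:", newIDs(prev, cur))
}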
	I1025 16:21:29.941700   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:34.944606   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:34.945164   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:34.990196   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:34.990343   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:35.012869   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:35.012977   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:35.028143   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:21:35.028236   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:35.040764   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:35.040840   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:35.052237   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:35.052306   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:35.063655   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:35.063725   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:35.074523   13110 logs.go:282] 0 containers: []
	W1025 16:21:35.074535   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:35.074602   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:35.085896   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:35.085918   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:35.085924   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:35.122095   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:35.122102   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:35.136990   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:35.137003   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:35.149320   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:35.149330   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:35.153551   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:21:35.153559   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:21:35.166679   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:35.166692   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:35.179689   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:35.179701   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:35.192228   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:35.192242   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:35.231239   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:21:35.231252   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:21:35.243887   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:35.243901   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:35.259010   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:35.259019   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:35.271435   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:35.271448   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:35.289945   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:35.289957   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:35.314428   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:35.314435   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:35.330103   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:35.330115   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:37.845561   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:42.847819   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:42.848274   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:42.884542   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:42.884700   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:42.906078   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:42.906202   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:42.921327   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:21:42.921410   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:42.935049   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:42.935123   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:42.946523   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:42.946610   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:42.965230   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:42.965328   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:42.976251   13110 logs.go:282] 0 containers: []
	W1025 16:21:42.976264   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:42.976321   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:42.986969   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:42.986987   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:42.986995   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:43.022821   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:43.022832   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:43.037354   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:43.037368   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:43.051733   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:43.051745   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:43.063890   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:43.063902   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:43.078760   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:43.078772   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:43.090770   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:43.090778   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:43.102523   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:43.102534   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:43.138270   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:43.138280   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:43.142349   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:21:43.142354   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:21:43.154043   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:43.154054   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:43.172357   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:21:43.172370   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:21:43.185973   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:43.185985   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:43.198055   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:43.198067   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:43.209853   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:43.209866   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:45.736932   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:50.739136   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:50.739246   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:50.751534   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:50.751618   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:50.769795   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:50.769885   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:50.783843   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:21:50.783925   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:50.796411   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:50.796499   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:50.813651   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:50.813735   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:50.829936   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:50.830017   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:50.842851   13110 logs.go:282] 0 containers: []
	W1025 16:21:50.842865   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:50.842936   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:50.863269   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:50.863290   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:50.863296   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:50.887428   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:50.887447   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:50.912677   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:50.912690   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:50.927663   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:50.927677   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:50.940045   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:50.940056   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:50.954740   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:50.954758   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:50.966969   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:50.966980   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:51.006199   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:51.006213   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:21:51.012049   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:51.012058   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:51.026485   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:51.026497   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:51.037842   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:51.037854   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:51.050176   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:51.050187   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:51.064022   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:21:51.064032   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:21:51.076206   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:51.076217   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:51.110560   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:21:51.110572   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:21:53.627866   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:21:58.630710   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:21:58.630868   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:21:58.642875   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:21:58.642957   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:21:58.653528   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:21:58.653604   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:21:58.664157   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:21:58.664234   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:21:58.674410   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:21:58.674478   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:21:58.684439   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:21:58.684516   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:21:58.695105   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:21:58.695180   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:21:58.705333   13110 logs.go:282] 0 containers: []
	W1025 16:21:58.705344   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:21:58.705403   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:21:58.716184   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:21:58.716202   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:21:58.716208   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:21:58.727873   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:21:58.727885   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:21:58.744393   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:21:58.744403   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:21:58.759053   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:21:58.759062   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:21:58.770526   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:21:58.770538   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:21:58.784279   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:21:58.784291   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:21:58.798367   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:21:58.798379   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:21:58.809584   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:21:58.809596   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:21:58.847644   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:21:58.847650   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:21:58.858988   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:21:58.858999   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:21:58.873306   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:21:58.873319   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:21:58.885184   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:21:58.885196   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:21:58.909718   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:21:58.909726   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:21:58.921280   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:21:58.921293   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:21:58.960882   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:21:58.960893   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:01.467028   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:06.469287   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:06.469844   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:06.509051   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:06.509205   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:06.531891   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:06.532021   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:06.547124   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:06.547199   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:06.561258   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:06.561326   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:06.573603   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:06.573683   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:06.584234   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:06.584310   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:06.594453   13110 logs.go:282] 0 containers: []
	W1025 16:22:06.594471   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:06.594537   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:06.606347   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:06.606375   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:06.606381   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:06.622003   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:06.622017   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:06.636651   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:06.636664   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:06.655644   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:06.655654   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:06.688577   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:06.688591   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:06.701177   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:06.701188   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:06.705318   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:06.705327   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:06.717585   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:06.717597   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:06.729826   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:06.729838   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:06.754777   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:06.754786   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:06.766611   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:06.766621   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:06.803310   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:06.803323   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:06.818854   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:06.818864   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:06.834367   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:06.834379   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:06.871108   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:06.871118   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:09.391758   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:14.394375   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:14.394454   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:14.405441   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:14.405512   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:14.416492   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:14.416563   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:14.429037   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:14.429111   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:14.440850   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:14.440921   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:14.453372   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:14.453445   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:14.464716   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:14.464786   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:14.480899   13110 logs.go:282] 0 containers: []
	W1025 16:22:14.480914   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:14.480991   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:14.492474   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:14.492487   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:14.492492   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:14.510205   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:14.510219   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:14.524567   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:14.524582   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:14.538596   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:14.538608   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:14.555030   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:14.555043   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:14.579703   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:14.579715   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:14.584517   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:14.584527   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:14.596384   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:14.596393   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:14.608655   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:14.608666   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:14.624513   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:14.624529   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:14.639991   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:14.639999   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:14.655832   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:14.655849   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:14.670291   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:14.670303   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:14.684838   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:14.684849   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:14.722040   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:14.722056   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:17.262687   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:22.265343   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:22.265609   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:22.291947   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:22.292078   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:22.310050   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:22.310141   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:22.322806   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:22.322895   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:22.333509   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:22.333589   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:22.344118   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:22.344191   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:22.354691   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:22.354757   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:22.366177   13110 logs.go:282] 0 containers: []
	W1025 16:22:22.366191   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:22.366258   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:22.376727   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:22.376745   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:22.376751   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:22.410697   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:22.410710   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:22.432674   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:22.432687   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:22.444496   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:22.444510   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:22.448719   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:22.448728   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:22.462750   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:22.462758   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:22.474734   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:22.474744   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:22.487923   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:22.487934   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:22.503808   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:22.503828   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:22.529302   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:22.529325   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:22.543002   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:22.543017   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:22.557709   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:22.557723   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:22.598816   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:22.598840   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:22.614660   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:22.614671   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:22.628987   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:22.629005   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:25.155905   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:30.158210   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:30.158669   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:30.199035   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:30.199181   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:30.218027   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:30.218120   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:30.232143   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:30.232214   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:30.244218   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:30.244298   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:30.258661   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:30.258735   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:30.269482   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:30.269556   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:30.279737   13110 logs.go:282] 0 containers: []
	W1025 16:22:30.279752   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:30.279821   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:30.290342   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:30.290359   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:30.290365   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:30.304869   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:30.304879   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:30.319539   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:30.319549   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:30.331706   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:30.331717   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:30.356538   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:30.356547   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:30.367870   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:30.367883   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:30.403977   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:30.403986   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:30.407972   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:30.407979   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:30.421819   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:30.421830   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:30.434255   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:30.434265   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:30.468921   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:30.468932   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:30.481136   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:30.481148   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:30.498565   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:30.498574   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:30.516554   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:30.516567   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:30.528555   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:30.528566   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:33.041448   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:38.042762   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:38.042838   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:38.055949   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:38.056022   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:38.066962   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:38.067027   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:38.078113   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:38.078188   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:38.089699   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:38.089771   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:38.101138   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:38.101205   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:38.112039   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:38.112105   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:38.122573   13110 logs.go:282] 0 containers: []
	W1025 16:22:38.122586   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:38.122655   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:38.133712   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:38.133730   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:38.133737   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:38.148482   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:38.148501   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:38.164091   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:38.164099   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:38.176650   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:38.176662   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:38.213943   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:38.213958   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:38.219012   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:38.219020   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:38.257499   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:38.257515   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:38.270958   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:38.270969   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:38.288871   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:38.288885   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:38.302419   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:38.302430   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:38.318516   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:38.318528   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:38.333912   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:38.333922   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:38.346199   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:38.346212   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:38.368561   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:38.368577   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:38.393365   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:38.393385   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:40.909809   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:45.912522   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:45.913079   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:45.955812   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:45.955968   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:45.982220   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:45.982328   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:46.000456   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:46.000546   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:46.022367   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:46.022447   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:46.034462   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:46.034546   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:46.044925   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:46.044993   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:46.055181   13110 logs.go:282] 0 containers: []
	W1025 16:22:46.055196   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:46.055270   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:46.065471   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:46.065490   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:46.065496   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:46.076895   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:46.076908   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:46.100619   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:46.100629   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:46.114555   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:46.114568   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:46.126573   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:46.126585   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:46.138743   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:46.138754   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:46.143135   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:46.143141   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:46.181162   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:46.181174   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:46.193191   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:46.193203   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:46.207786   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:46.207798   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:46.225262   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:46.225270   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:46.263475   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:46.263483   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:46.275470   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:46.275480   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:46.286828   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:46.286840   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:46.298111   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:46.298124   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:48.814062   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:22:53.816481   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:22:53.817033   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:22:53.856498   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:22:53.856647   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:22:53.877671   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:22:53.877795   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:22:53.893639   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:22:53.893722   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:22:53.906069   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:22:53.906154   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:22:53.916940   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:22:53.917019   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:22:53.927919   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:22:53.927995   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:22:53.938342   13110 logs.go:282] 0 containers: []
	W1025 16:22:53.938353   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:22:53.938416   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:22:53.948810   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:22:53.948830   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:22:53.948835   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:22:53.953546   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:22:53.953553   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:22:53.965644   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:22:53.965655   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:22:53.987018   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:22:53.987028   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:22:54.010411   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:22:54.010420   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:22:54.046621   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:22:54.046629   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:22:54.064383   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:22:54.064397   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:22:54.075971   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:22:54.075983   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:22:54.087433   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:22:54.087445   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:22:54.099103   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:22:54.099119   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:22:54.111281   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:22:54.111290   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:22:54.126605   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:22:54.126614   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:22:54.164519   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:22:54.164532   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:22:54.178858   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:22:54.178871   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:22:54.191963   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:22:54.191974   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:22:56.706321   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:23:01.709176   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:23:01.710449   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 16:23:01.750952   13110 logs.go:282] 1 containers: [a9c44d19c2b3]
	I1025 16:23:01.751121   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 16:23:01.772447   13110 logs.go:282] 1 containers: [1047955b75c8]
	I1025 16:23:01.772555   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 16:23:01.788216   13110 logs.go:282] 4 containers: [2f0fbaae3e89 fa5fa557ec92 e387746d72b2 4a027988dfff]
	I1025 16:23:01.788317   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 16:23:01.801380   13110 logs.go:282] 1 containers: [b2a2f830ccab]
	I1025 16:23:01.801486   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 16:23:01.813442   13110 logs.go:282] 1 containers: [b608fbf7ec7a]
	I1025 16:23:01.813517   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 16:23:01.825001   13110 logs.go:282] 1 containers: [2c7994bb1341]
	I1025 16:23:01.825068   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 16:23:01.836576   13110 logs.go:282] 0 containers: []
	W1025 16:23:01.836588   13110 logs.go:284] No container was found matching "kindnet"
	I1025 16:23:01.836662   13110 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 16:23:01.847957   13110 logs.go:282] 1 containers: [136604e1b2f3]
	I1025 16:23:01.847978   13110 logs.go:123] Gathering logs for coredns [e387746d72b2] ...
	I1025 16:23:01.847984   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e387746d72b2"
	I1025 16:23:01.860583   13110 logs.go:123] Gathering logs for coredns [4a027988dfff] ...
	I1025 16:23:01.860595   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a027988dfff"
	I1025 16:23:01.879465   13110 logs.go:123] Gathering logs for kube-scheduler [b2a2f830ccab] ...
	I1025 16:23:01.879478   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2a2f830ccab"
	I1025 16:23:01.895650   13110 logs.go:123] Gathering logs for kube-controller-manager [2c7994bb1341] ...
	I1025 16:23:01.895661   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c7994bb1341"
	I1025 16:23:01.914451   13110 logs.go:123] Gathering logs for coredns [2f0fbaae3e89] ...
	I1025 16:23:01.914467   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2f0fbaae3e89"
	I1025 16:23:01.927540   13110 logs.go:123] Gathering logs for coredns [fa5fa557ec92] ...
	I1025 16:23:01.927551   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa5fa557ec92"
	I1025 16:23:01.940785   13110 logs.go:123] Gathering logs for container status ...
	I1025 16:23:01.940797   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 16:23:01.955613   13110 logs.go:123] Gathering logs for kubelet ...
	I1025 16:23:01.955626   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 16:23:01.994642   13110 logs.go:123] Gathering logs for dmesg ...
	I1025 16:23:01.994664   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 16:23:01.999982   13110 logs.go:123] Gathering logs for describe nodes ...
	I1025 16:23:01.999996   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 16:23:02.053420   13110 logs.go:123] Gathering logs for etcd [1047955b75c8] ...
	I1025 16:23:02.053432   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1047955b75c8"
	I1025 16:23:02.068441   13110 logs.go:123] Gathering logs for kube-apiserver [a9c44d19c2b3] ...
	I1025 16:23:02.068453   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9c44d19c2b3"
	I1025 16:23:02.083816   13110 logs.go:123] Gathering logs for kube-proxy [b608fbf7ec7a] ...
	I1025 16:23:02.083828   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b608fbf7ec7a"
	I1025 16:23:02.099536   13110 logs.go:123] Gathering logs for storage-provisioner [136604e1b2f3] ...
	I1025 16:23:02.099546   13110 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136604e1b2f3"
	I1025 16:23:02.111793   13110 logs.go:123] Gathering logs for Docker ...
	I1025 16:23:02.111802   13110 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 16:23:04.637523   13110 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 16:23:09.639794   13110 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 16:23:09.647032   13110 out.go:201] 
	W1025 16:23:09.652423   13110 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1025 16:23:09.652464   13110 out.go:270] * 
	* 
	W1025 16:23:09.655224   13110 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:09.664049   13110 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-782000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.76s)
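
Note on this failure mode: unlike the GUEST_PROVISION errors in the tests below, this upgrade test got far enough to boot the VM; it is a GUEST_START error, meaning the apiserver at https://10.0.2.15:8443/healthz never reported healthy within the 6m0s wait shown above. When reproducing locally, the same endpoint minikube polls can be probed by hand (a sketch; the IP and port are taken from the log above, and -k is needed because the apiserver serves a certificate signed by the cluster CA):

	# from the host, while the guest is running
	curl -k https://10.0.2.15:8443/healthz
	# a healthy apiserver answers with the plain text: ok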

                                                
                                    
TestPause/serial/Start (10.01s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-752000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-752000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.936393042s)

                                                
                                                
-- stdout --
	* [pause-752000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-752000" primary control-plane node in "pause-752000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-752000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-752000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-752000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-752000 -n pause-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-752000 -n pause-752000: exit status 7 (69.995625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-752000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.01s)
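
Every qemu2 provisioning failure in this run shares a single root cause, visible in the stderr above: connections to the socket_vmnet daemon at /var/run/socket_vmnet are refused, so socket_vmnet_client exits with status 1 before QEMU can start. A minimal host-side health check would look roughly like this (a sketch assuming the install layout shown in these logs; how the daemon is supervised on this Jenkins host is not recorded here):

	ls -l /var/run/socket_vmnet    # the daemon's UNIX socket; "Connection refused" means nothing is accepting on it
	pgrep -fl socket_vmnet         # no output means the daemon is not running and must be restarted before the qemu2 tests can pass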

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-999000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-999000 --driver=qemu2 : exit status 80 (10.087978042s)

                                                
                                                
-- stdout --
	* [NoKubernetes-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-999000" primary control-plane node in "NoKubernetes-999000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-999000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-999000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-999000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000: exit status 7 (71.494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-999000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.16s)
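
The three NoKubernetes subtests that follow reuse the NoKubernetes-999000 profile left behind by this failure, which is why their stderr shows the restart path ("driver start: Failed to connect ...") instead of the create path ("creating host: create: creating: ..."). Clearing the stale profile between attempts follows the hint printed in the output above (command taken from the log's own suggestion):

	out/minikube-darwin-arm64 delete -p NoKubernetes-999000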

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2 : exit status 80 (5.266874584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-999000
	* Restarting existing qemu2 VM for "NoKubernetes-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-999000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000: exit status 7 (62.374625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-999000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

                                                
                                    
TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2 : exit status 80 (5.256298375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-999000
	* Restarting existing qemu2 VM for "NoKubernetes-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-999000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000: exit status 7 (56.038084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-999000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-999000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-999000 --driver=qemu2 : exit status 80 (5.261170959s)

                                                
                                                
-- stdout --
	* [NoKubernetes-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-999000
	* Restarting existing qemu2 VM for "NoKubernetes-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-999000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-999000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-999000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-999000 -n NoKubernetes-999000: exit status 7 (57.968167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-999000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.880668709s)

                                                
                                                
-- stdout --
	* [auto-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-864000" primary control-plane node in "auto-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:21:13.695432   13289 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:21:13.695595   13289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:13.695602   13289 out.go:358] Setting ErrFile to fd 2...
	I1025 16:21:13.695605   13289 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:13.695743   13289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:21:13.696930   13289 out.go:352] Setting JSON to false
	I1025 16:21:13.714639   13289 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7711,"bootTime":1729890762,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:21:13.714718   13289 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:21:13.721416   13289 out.go:177] * [auto-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:21:13.729365   13289 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:21:13.729462   13289 notify.go:220] Checking for updates...
	I1025 16:21:13.736447   13289 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:21:13.739433   13289 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:21:13.742468   13289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:21:13.745486   13289 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:21:13.748470   13289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:21:13.751800   13289 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:21:13.751872   13289 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:21:13.751920   13289 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:21:13.756525   13289 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:21:13.763400   13289 start.go:297] selected driver: qemu2
	I1025 16:21:13.763406   13289 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:21:13.763412   13289 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:21:13.765832   13289 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:21:13.768422   13289 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:21:13.771377   13289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:21:13.771394   13289 cni.go:84] Creating CNI manager for ""
	I1025 16:21:13.771413   13289 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:21:13.771419   13289 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:21:13.771451   13289 start.go:340] cluster config:
	{Name:auto-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:21:13.775862   13289 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:21:13.784476   13289 out.go:177] * Starting "auto-864000" primary control-plane node in "auto-864000" cluster
	I1025 16:21:13.788422   13289 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:21:13.788444   13289 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:21:13.788453   13289 cache.go:56] Caching tarball of preloaded images
	I1025 16:21:13.788520   13289 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:21:13.788524   13289 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:21:13.788568   13289 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/auto-864000/config.json ...
	I1025 16:21:13.788578   13289 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/auto-864000/config.json: {Name:mke10acc3f4ef5b5847f4413fa5b028523f8a062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:21:13.788923   13289 start.go:360] acquireMachinesLock for auto-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:13.788965   13289 start.go:364] duration metric: took 36.583µs to acquireMachinesLock for "auto-864000"
	I1025 16:21:13.788978   13289 start.go:93] Provisioning new machine with config: &{Name:auto-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:13.789004   13289 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:13.796456   13289 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:13.811917   13289 start.go:159] libmachine.API.Create for "auto-864000" (driver="qemu2")
	I1025 16:21:13.811959   13289 client.go:168] LocalClient.Create starting
	I1025 16:21:13.812033   13289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:13.812071   13289 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:13.812086   13289 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:13.812125   13289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:13.812156   13289 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:13.812164   13289 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:13.812548   13289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:13.974492   13289 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:14.137701   13289 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:14.137711   13289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:14.137908   13289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2
	I1025 16:21:14.147731   13289 main.go:141] libmachine: STDOUT: 
	I1025 16:21:14.147754   13289 main.go:141] libmachine: STDERR: 
	I1025 16:21:14.147823   13289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2 +20000M
	I1025 16:21:14.156554   13289 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:14.156574   13289 main.go:141] libmachine: STDERR: 
	I1025 16:21:14.156588   13289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2
	I1025 16:21:14.156595   13289 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:14.156610   13289 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:14.156642   13289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:6d:ae:09:0b:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2
	I1025 16:21:14.158790   13289 main.go:141] libmachine: STDOUT: 
	I1025 16:21:14.158808   13289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:14.158831   13289 client.go:171] duration metric: took 346.866208ms to LocalClient.Create
	I1025 16:21:16.161041   13289 start.go:128] duration metric: took 2.372025583s to createHost
	I1025 16:21:16.161119   13289 start.go:83] releasing machines lock for "auto-864000", held for 2.372160417s
	W1025 16:21:16.161185   13289 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:16.171459   13289 out.go:177] * Deleting "auto-864000" in qemu2 ...
	W1025 16:21:16.193859   13289 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:16.193897   13289 start.go:729] Will try again in 5 seconds ...
	I1025 16:21:21.195970   13289 start.go:360] acquireMachinesLock for auto-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:21.196212   13289 start.go:364] duration metric: took 203.542µs to acquireMachinesLock for "auto-864000"
	I1025 16:21:21.196267   13289 start.go:93] Provisioning new machine with config: &{Name:auto-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:21.196331   13289 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:21.204630   13289 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:21.225534   13289 start.go:159] libmachine.API.Create for "auto-864000" (driver="qemu2")
	I1025 16:21:21.225575   13289 client.go:168] LocalClient.Create starting
	I1025 16:21:21.225680   13289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:21.225733   13289 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:21.225749   13289 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:21.225783   13289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:21.225816   13289 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:21.225825   13289 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:21.226262   13289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:21.386734   13289 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:21.485587   13289 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:21.485597   13289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:21.485801   13289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2
	I1025 16:21:21.496038   13289 main.go:141] libmachine: STDOUT: 
	I1025 16:21:21.496058   13289 main.go:141] libmachine: STDERR: 
	I1025 16:21:21.496115   13289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2 +20000M
	I1025 16:21:21.504762   13289 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:21.504777   13289 main.go:141] libmachine: STDERR: 
	I1025 16:21:21.504789   13289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2
	I1025 16:21:21.504795   13289 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:21.504806   13289 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:21.504852   13289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:f3:af:fc:4a:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/auto-864000/disk.qcow2
	I1025 16:21:21.506771   13289 main.go:141] libmachine: STDOUT: 
	I1025 16:21:21.506795   13289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:21.506809   13289 client.go:171] duration metric: took 281.231125ms to LocalClient.Create
	I1025 16:21:23.508914   13289 start.go:128] duration metric: took 2.312584333s to createHost
	I1025 16:21:23.508957   13289 start.go:83] releasing machines lock for "auto-864000", held for 2.312742916s
	W1025 16:21:23.509196   13289 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:23.518546   13289 out.go:201] 
	W1025 16:21:23.524488   13289 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:21:23.524501   13289 out.go:270] * 
	* 
	W1025 16:21:23.525779   13289 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:21:23.535580   13289 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)
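
The libmachine "executing:" lines in the trace above show how the qemu2 driver wires up networking: qemu-system-aarch64 is not launched directly but through socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as file descriptor 3, matching the -netdev socket,id=net0,fd=3 flag. Stripped of the disk, ISO, firmware, and monitor arguments, the invocation reduces to the following (flags copied from the log; the elided arguments vary per profile):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 \
	  ...

Because the client cannot connect, it fails before QEMU ever runs, and that exit status 1 is what gets wrapped into every "Connection refused: exit status 1" GUEST_PROVISION error in this report.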

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.827940834s)

                                                
                                                
-- stdout --
	* [kindnet-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-864000" primary control-plane node in "kindnet-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 16:21:25.978632   13400 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:21:25.978793   13400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:25.978797   13400 out.go:358] Setting ErrFile to fd 2...
	I1025 16:21:25.978799   13400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:25.978920   13400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:21:25.980078   13400 out.go:352] Setting JSON to false
	I1025 16:21:25.998050   13400 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7723,"bootTime":1729890762,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:21:25.998115   13400 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:21:26.003134   13400 out.go:177] * [kindnet-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:21:26.010124   13400 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:21:26.010200   13400 notify.go:220] Checking for updates...
	I1025 16:21:26.017061   13400 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:21:26.020075   13400 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:21:26.023027   13400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:21:26.026068   13400 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:21:26.029068   13400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:21:26.032378   13400 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:21:26.032448   13400 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:21:26.032495   13400 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:21:26.037021   13400 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:21:26.043027   13400 start.go:297] selected driver: qemu2
	I1025 16:21:26.043033   13400 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:21:26.043040   13400 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:21:26.045551   13400 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:21:26.048079   13400 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:21:26.051159   13400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:21:26.051189   13400 cni.go:84] Creating CNI manager for "kindnet"
	I1025 16:21:26.051194   13400 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 16:21:26.051236   13400 start.go:340] cluster config:
	{Name:kindnet-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:21:26.055815   13400 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:21:26.064074   13400 out.go:177] * Starting "kindnet-864000" primary control-plane node in "kindnet-864000" cluster
	I1025 16:21:26.068086   13400 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:21:26.068113   13400 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:21:26.068128   13400 cache.go:56] Caching tarball of preloaded images
	I1025 16:21:26.068204   13400 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:21:26.068209   13400 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:21:26.068269   13400 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kindnet-864000/config.json ...
	I1025 16:21:26.068279   13400 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kindnet-864000/config.json: {Name:mkf7790761e49ffdd0adef82d5a15964ace601e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:21:26.068570   13400 start.go:360] acquireMachinesLock for kindnet-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:26.068620   13400 start.go:364] duration metric: took 41.75µs to acquireMachinesLock for "kindnet-864000"
	I1025 16:21:26.068631   13400 start.go:93] Provisioning new machine with config: &{Name:kindnet-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:26.068677   13400 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:26.076042   13400 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:26.093410   13400 start.go:159] libmachine.API.Create for "kindnet-864000" (driver="qemu2")
	I1025 16:21:26.093435   13400 client.go:168] LocalClient.Create starting
	I1025 16:21:26.093507   13400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:26.093549   13400 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:26.093566   13400 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:26.093605   13400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:26.093638   13400 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:26.093646   13400 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:26.094019   13400 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:26.252815   13400 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:26.364713   13400 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:26.364721   13400 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:26.364929   13400 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2
	I1025 16:21:26.375129   13400 main.go:141] libmachine: STDOUT: 
	I1025 16:21:26.375147   13400 main.go:141] libmachine: STDERR: 
	I1025 16:21:26.375201   13400 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2 +20000M
	I1025 16:21:26.383989   13400 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:26.384005   13400 main.go:141] libmachine: STDERR: 
	I1025 16:21:26.384020   13400 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2
	I1025 16:21:26.384025   13400 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:26.384038   13400 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:26.384062   13400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:7b:f6:b3:9b:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2
	I1025 16:21:26.385884   13400 main.go:141] libmachine: STDOUT: 
	I1025 16:21:26.385903   13400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:26.385922   13400 client.go:171] duration metric: took 292.480542ms to LocalClient.Create
	I1025 16:21:28.388025   13400 start.go:128] duration metric: took 2.319352167s to createHost
	I1025 16:21:28.388092   13400 start.go:83] releasing machines lock for "kindnet-864000", held for 2.319482291s
	W1025 16:21:28.388120   13400 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:28.397932   13400 out.go:177] * Deleting "kindnet-864000" in qemu2 ...
	W1025 16:21:28.412491   13400 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:28.412503   13400 start.go:729] Will try again in 5 seconds ...
	I1025 16:21:33.413248   13400 start.go:360] acquireMachinesLock for kindnet-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:33.413431   13400 start.go:364] duration metric: took 144.958µs to acquireMachinesLock for "kindnet-864000"
	I1025 16:21:33.413463   13400 start.go:93] Provisioning new machine with config: &{Name:kindnet-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:33.413512   13400 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:33.425704   13400 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:33.442098   13400 start.go:159] libmachine.API.Create for "kindnet-864000" (driver="qemu2")
	I1025 16:21:33.442123   13400 client.go:168] LocalClient.Create starting
	I1025 16:21:33.442212   13400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:33.442258   13400 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:33.442266   13400 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:33.442302   13400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:33.442335   13400 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:33.442342   13400 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:33.442626   13400 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:33.602501   13400 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:33.718212   13400 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:33.718222   13400 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:33.718421   13400 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2
	I1025 16:21:33.728971   13400 main.go:141] libmachine: STDOUT: 
	I1025 16:21:33.728992   13400 main.go:141] libmachine: STDERR: 
	I1025 16:21:33.729048   13400 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2 +20000M
	I1025 16:21:33.738066   13400 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:33.738080   13400 main.go:141] libmachine: STDERR: 
	I1025 16:21:33.738089   13400 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2
	I1025 16:21:33.738093   13400 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:33.738101   13400 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:33.738129   13400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:de:5b:b7:bb:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kindnet-864000/disk.qcow2
	I1025 16:21:33.740025   13400 main.go:141] libmachine: STDOUT: 
	I1025 16:21:33.740039   13400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:33.740061   13400 client.go:171] duration metric: took 297.926625ms to LocalClient.Create
	I1025 16:21:35.742226   13400 start.go:128] duration metric: took 2.328713042s to createHost
	I1025 16:21:35.742274   13400 start.go:83] releasing machines lock for "kindnet-864000", held for 2.328853s
	W1025 16:21:35.742398   13400 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:35.751757   13400 out.go:201] 
	W1025 16:21:35.755765   13400 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:21:35.755775   13400 out.go:270] * 
	W1025 16:21:35.756618   13400 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:21:35.763791   13400 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
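Every failure in this group reduces to the same stderr line above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation aborts. A minimal Go sketch of the same probe (a hypothetical standalone helper, not part of minikube or this test suite; only the socket path is taken from the logs) distinguishes a stopped socket_vmnet daemon ("connection refused") from one that was never installed ("no such file or directory"):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Socket path copied from the failing qemu2 invocations above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// "connection refused": the socket file exists but no daemon is listening.
			// "no such file or directory": socket_vmnet was never started or installed.
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Depending on how socket_vmnet was installed, the socket may be root-owned, so the probe may need to run with elevated permissions to avoid a "permission denied" false negative.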
TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.816322541s)

-- stdout --
	* [calico-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-864000" primary control-plane node in "calico-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:21:38.241081   13513 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:21:38.241568   13513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:38.241603   13513 out.go:358] Setting ErrFile to fd 2...
	I1025 16:21:38.241616   13513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:38.242206   13513 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:21:38.243684   13513 out.go:352] Setting JSON to false
	I1025 16:21:38.261889   13513 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7736,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:21:38.261984   13513 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:21:38.268657   13513 out.go:177] * [calico-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:21:38.276676   13513 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:21:38.276739   13513 notify.go:220] Checking for updates...
	I1025 16:21:38.283675   13513 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:21:38.286521   13513 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:21:38.290616   13513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:21:38.294741   13513 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:21:38.298511   13513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:21:38.302889   13513 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:21:38.302973   13513 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:21:38.303030   13513 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:21:38.306696   13513 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:21:38.314676   13513 start.go:297] selected driver: qemu2
	I1025 16:21:38.314683   13513 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:21:38.314691   13513 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:21:38.317349   13513 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:21:38.321419   13513 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:21:38.325730   13513 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:21:38.325764   13513 cni.go:84] Creating CNI manager for "calico"
	I1025 16:21:38.325769   13513 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1025 16:21:38.325809   13513 start.go:340] cluster config:
	{Name:calico-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:21:38.330440   13513 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:21:38.338575   13513 out.go:177] * Starting "calico-864000" primary control-plane node in "calico-864000" cluster
	I1025 16:21:38.342573   13513 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:21:38.342595   13513 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:21:38.342606   13513 cache.go:56] Caching tarball of preloaded images
	I1025 16:21:38.342687   13513 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:21:38.342693   13513 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:21:38.342776   13513 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/calico-864000/config.json ...
	I1025 16:21:38.342787   13513 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/calico-864000/config.json: {Name:mkb95ab3a8f8879d98d3d9236d8dfbf7ce4da663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:21:38.343087   13513 start.go:360] acquireMachinesLock for calico-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:38.343143   13513 start.go:364] duration metric: took 48.75µs to acquireMachinesLock for "calico-864000"
	I1025 16:21:38.343157   13513 start.go:93] Provisioning new machine with config: &{Name:calico-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:38.343184   13513 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:38.351639   13513 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:38.366215   13513 start.go:159] libmachine.API.Create for "calico-864000" (driver="qemu2")
	I1025 16:21:38.366242   13513 client.go:168] LocalClient.Create starting
	I1025 16:21:38.366306   13513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:38.366349   13513 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:38.366360   13513 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:38.366398   13513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:38.366427   13513 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:38.366435   13513 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:38.366819   13513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:38.527519   13513 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:38.612852   13513 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:38.612858   13513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:38.613062   13513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2
	I1025 16:21:38.623169   13513 main.go:141] libmachine: STDOUT: 
	I1025 16:21:38.623191   13513 main.go:141] libmachine: STDERR: 
	I1025 16:21:38.623242   13513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2 +20000M
	I1025 16:21:38.631872   13513 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:38.631886   13513 main.go:141] libmachine: STDERR: 
	I1025 16:21:38.631908   13513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2
	I1025 16:21:38.631913   13513 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:38.631925   13513 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:38.631952   13513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:34:5d:da:6f:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2
	I1025 16:21:38.633633   13513 main.go:141] libmachine: STDOUT: 
	I1025 16:21:38.633648   13513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:38.633670   13513 client.go:171] duration metric: took 267.424209ms to LocalClient.Create
	I1025 16:21:40.635869   13513 start.go:128] duration metric: took 2.292667166s to createHost
	I1025 16:21:40.635945   13513 start.go:83] releasing machines lock for "calico-864000", held for 2.292807292s
	W1025 16:21:40.636003   13513 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:40.650574   13513 out.go:177] * Deleting "calico-864000" in qemu2 ...
	W1025 16:21:40.676054   13513 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:40.676086   13513 start.go:729] Will try again in 5 seconds ...
	I1025 16:21:45.678170   13513 start.go:360] acquireMachinesLock for calico-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:45.678433   13513 start.go:364] duration metric: took 225.125µs to acquireMachinesLock for "calico-864000"
	I1025 16:21:45.678491   13513 start.go:93] Provisioning new machine with config: &{Name:calico-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:45.678617   13513 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:45.683992   13513 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:45.715959   13513 start.go:159] libmachine.API.Create for "calico-864000" (driver="qemu2")
	I1025 16:21:45.716005   13513 client.go:168] LocalClient.Create starting
	I1025 16:21:45.716138   13513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:45.716206   13513 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:45.716220   13513 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:45.716279   13513 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:45.716328   13513 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:45.716336   13513 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:45.716909   13513 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:45.885626   13513 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:45.958210   13513 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:45.958219   13513 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:45.958418   13513 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2
	I1025 16:21:45.968210   13513 main.go:141] libmachine: STDOUT: 
	I1025 16:21:45.968232   13513 main.go:141] libmachine: STDERR: 
	I1025 16:21:45.968287   13513 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2 +20000M
	I1025 16:21:45.976799   13513 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:45.976815   13513 main.go:141] libmachine: STDERR: 
	I1025 16:21:45.976840   13513 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2
	I1025 16:21:45.976846   13513 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:45.976861   13513 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:45.976886   13513 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:46:20:ce:cb:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/calico-864000/disk.qcow2
	I1025 16:21:45.978709   13513 main.go:141] libmachine: STDOUT: 
	I1025 16:21:45.978726   13513 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:45.978746   13513 client.go:171] duration metric: took 262.738375ms to LocalClient.Create
	I1025 16:21:47.980965   13513 start.go:128] duration metric: took 2.302328292s to createHost
	I1025 16:21:47.981045   13513 start.go:83] releasing machines lock for "calico-864000", held for 2.302612417s
	W1025 16:21:47.981400   13513 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:47.993022   13513 out.go:201] 
	W1025 16:21:47.996977   13513 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:21:47.997007   13513 out.go:270] * 
	W1025 16:21:47.999671   13513 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:21:48.011000   13513 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
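Each of these failures follows the same two-attempt flow: the first create fails, the profile is deleted, and after "Will try again in 5 seconds ..." a second create fails identically, so the command exits with status 80 and net_test.go asserts on that code. A compact Go sketch of the observed flow (an illustration only, not minikube's actual start.go logic; the function name is invented):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for qemu2 host creation; with the socket_vmnet
	// daemon down it always fails the way the logs above show.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "calico-864000" // profile name taken from the log above
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(profile); err != nil {
				// At this point minikube reports GUEST_PROVISION and exits 80.
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}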
TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.82948375s)

-- stdout --
	* [custom-flannel-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-864000" primary control-plane node in "custom-flannel-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:21:50.616562   13630 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:21:50.616725   13630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:50.616729   13630 out.go:358] Setting ErrFile to fd 2...
	I1025 16:21:50.616731   13630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:21:50.616872   13630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:21:50.618027   13630 out.go:352] Setting JSON to false
	I1025 16:21:50.636149   13630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7748,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:21:50.636233   13630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:21:50.641783   13630 out.go:177] * [custom-flannel-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:21:50.649506   13630 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:21:50.649589   13630 notify.go:220] Checking for updates...
	I1025 16:21:50.656672   13630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:21:50.657970   13630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:21:50.660657   13630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:21:50.663737   13630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:21:50.666690   13630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:21:50.669960   13630 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:21:50.670038   13630 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:21:50.670083   13630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:21:50.674648   13630 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:21:50.681668   13630 start.go:297] selected driver: qemu2
	I1025 16:21:50.681677   13630 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:21:50.681685   13630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:21:50.684249   13630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:21:50.687647   13630 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:21:50.690795   13630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:21:50.690816   13630 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1025 16:21:50.690831   13630 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1025 16:21:50.690862   13630 start.go:340] cluster config:
	{Name:custom-flannel-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:21:50.695644   13630 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:21:50.703681   13630 out.go:177] * Starting "custom-flannel-864000" primary control-plane node in "custom-flannel-864000" cluster
	I1025 16:21:50.707558   13630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:21:50.707576   13630 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:21:50.707586   13630 cache.go:56] Caching tarball of preloaded images
	I1025 16:21:50.707662   13630 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:21:50.707675   13630 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:21:50.707726   13630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/custom-flannel-864000/config.json ...
	I1025 16:21:50.707742   13630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/custom-flannel-864000/config.json: {Name:mk74880483ae7d7225f35542ad94ea4ce9d39eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:21:50.708079   13630 start.go:360] acquireMachinesLock for custom-flannel-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:50.708125   13630 start.go:364] duration metric: took 37.25µs to acquireMachinesLock for "custom-flannel-864000"
	I1025 16:21:50.708136   13630 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:50.708166   13630 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:50.716645   13630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:50.731517   13630 start.go:159] libmachine.API.Create for "custom-flannel-864000" (driver="qemu2")
	I1025 16:21:50.731543   13630 client.go:168] LocalClient.Create starting
	I1025 16:21:50.731611   13630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:50.731656   13630 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:50.731665   13630 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:50.731699   13630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:50.731731   13630 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:50.731737   13630 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:50.732169   13630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:50.893764   13630 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:50.969550   13630 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:50.969562   13630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:50.969804   13630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2
	I1025 16:21:50.980647   13630 main.go:141] libmachine: STDOUT: 
	I1025 16:21:50.980686   13630 main.go:141] libmachine: STDERR: 
	I1025 16:21:50.980753   13630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2 +20000M
	I1025 16:21:50.989867   13630 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:50.989892   13630 main.go:141] libmachine: STDERR: 
	I1025 16:21:50.989911   13630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2
	I1025 16:21:50.989917   13630 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:50.989930   13630 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:50.989968   13630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:19:8b:e9:34:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2
	I1025 16:21:50.991838   13630 main.go:141] libmachine: STDOUT: 
	I1025 16:21:50.991867   13630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:50.991887   13630 client.go:171] duration metric: took 260.34ms to LocalClient.Create
	I1025 16:21:52.994089   13630 start.go:128] duration metric: took 2.28590675s to createHost
	I1025 16:21:52.994166   13630 start.go:83] releasing machines lock for "custom-flannel-864000", held for 2.286046s
	W1025 16:21:52.994280   13630 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:53.008574   13630 out.go:177] * Deleting "custom-flannel-864000" in qemu2 ...
	W1025 16:21:53.033676   13630 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:21:53.033707   13630 start.go:729] Will try again in 5 seconds ...
	I1025 16:21:58.035990   13630 start.go:360] acquireMachinesLock for custom-flannel-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:21:58.036713   13630 start.go:364] duration metric: took 599.5µs to acquireMachinesLock for "custom-flannel-864000"
	I1025 16:21:58.036852   13630 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:21:58.037146   13630 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:21:58.046791   13630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:21:58.095476   13630 start.go:159] libmachine.API.Create for "custom-flannel-864000" (driver="qemu2")
	I1025 16:21:58.095531   13630 client.go:168] LocalClient.Create starting
	I1025 16:21:58.095664   13630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:21:58.095757   13630 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:58.095779   13630 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:58.095846   13630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:21:58.095903   13630 main.go:141] libmachine: Decoding PEM data...
	I1025 16:21:58.095915   13630 main.go:141] libmachine: Parsing certificate...
	I1025 16:21:58.096583   13630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:21:58.269162   13630 main.go:141] libmachine: Creating SSH key...
	I1025 16:21:58.349666   13630 main.go:141] libmachine: Creating Disk image...
	I1025 16:21:58.349674   13630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:21:58.349888   13630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2
	I1025 16:21:58.359812   13630 main.go:141] libmachine: STDOUT: 
	I1025 16:21:58.359847   13630 main.go:141] libmachine: STDERR: 
	I1025 16:21:58.359902   13630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2 +20000M
	I1025 16:21:58.368518   13630 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:21:58.368538   13630 main.go:141] libmachine: STDERR: 
	I1025 16:21:58.368552   13630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2
	I1025 16:21:58.368558   13630 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:21:58.368568   13630 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:21:58.368594   13630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:37:10:c2:fb:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/custom-flannel-864000/disk.qcow2
	I1025 16:21:58.370448   13630 main.go:141] libmachine: STDOUT: 
	I1025 16:21:58.370463   13630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:21:58.370477   13630 client.go:171] duration metric: took 274.94275ms to LocalClient.Create
	I1025 16:22:00.372673   13630 start.go:128] duration metric: took 2.335506958s to createHost
	I1025 16:22:00.372741   13630 start.go:83] releasing machines lock for "custom-flannel-864000", held for 2.336019625s
	W1025 16:22:00.373132   13630 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:00.382759   13630 out.go:201] 
	W1025 16:22:00.387907   13630 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:22:00.387985   13630 out.go:270] * 
	* 
	W1025 16:22:00.390541   13630 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:22:00.399847   13630 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)

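Every failure in this group stops at the same step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so QEMU is never handed its network file descriptor and host creation aborts. A first triage pass on the build agent might look like the sketch below; it assumes socket_vmnet was installed via Homebrew (consistent with the /opt/socket_vmnet and /opt/homebrew paths in the logs), and the exact service commands may differ on other setups:

	# Is the daemon alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# Restart it; socket_vmnet must run as root to use the vmnet framework
	sudo brew services restart socket_vmnet

	# Then clear the half-created profile, as the error text above suggests
	out/minikube-darwin-arm64 delete -p custom-flannel-864000

If the daemon is healthy afterwards, the retry loop above ("StartHost failed, but will try again") would normally succeed on its second attempt instead of exiting with GUEST_PROVISION.
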
TestNetworkPlugins/group/false/Start (9.76s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.762839833s)
-- stdout --
	* [false-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-864000" primary control-plane node in "false-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1025 16:22:02.970263   13747 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:22:02.970411   13747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:02.970417   13747 out.go:358] Setting ErrFile to fd 2...
	I1025 16:22:02.970419   13747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:02.970550   13747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:22:02.971958   13747 out.go:352] Setting JSON to false
	I1025 16:22:02.990089   13747 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7760,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:22:02.990187   13747 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:22:02.995552   13747 out.go:177] * [false-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:22:03.003532   13747 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:22:03.003595   13747 notify.go:220] Checking for updates...
	I1025 16:22:03.010541   13747 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:22:03.013546   13747 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:22:03.016573   13747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:22:03.019479   13747 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:22:03.022537   13747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:22:03.025840   13747 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:22:03.025909   13747 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:22:03.025959   13747 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:22:03.033540   13747 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:22:03.040445   13747 start.go:297] selected driver: qemu2
	I1025 16:22:03.040452   13747 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:22:03.040459   13747 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:22:03.043030   13747 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:22:03.046489   13747 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:22:03.049651   13747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:22:03.049685   13747 cni.go:84] Creating CNI manager for "false"
	I1025 16:22:03.049712   13747 start.go:340] cluster config:
	{Name:false-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:22:03.054043   13747 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:22:03.062544   13747 out.go:177] * Starting "false-864000" primary control-plane node in "false-864000" cluster
	I1025 16:22:03.065433   13747 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:22:03.065447   13747 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:22:03.065459   13747 cache.go:56] Caching tarball of preloaded images
	I1025 16:22:03.065535   13747 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:22:03.065540   13747 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:22:03.065599   13747 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/false-864000/config.json ...
	I1025 16:22:03.065610   13747 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/false-864000/config.json: {Name:mk0f8bf44104c79c01d52b34cb1ea330c20d62be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:22:03.065949   13747 start.go:360] acquireMachinesLock for false-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:03.065993   13747 start.go:364] duration metric: took 38.833µs to acquireMachinesLock for "false-864000"
	I1025 16:22:03.066003   13747 start.go:93] Provisioning new machine with config: &{Name:false-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:03.066024   13747 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:03.069612   13747 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:03.084166   13747 start.go:159] libmachine.API.Create for "false-864000" (driver="qemu2")
	I1025 16:22:03.084192   13747 client.go:168] LocalClient.Create starting
	I1025 16:22:03.084263   13747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:03.084302   13747 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:03.084315   13747 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:03.084361   13747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:03.084390   13747 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:03.084398   13747 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:03.084801   13747 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:03.244958   13747 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:03.305044   13747 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:03.305050   13747 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:03.305450   13747 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2
	I1025 16:22:03.315603   13747 main.go:141] libmachine: STDOUT: 
	I1025 16:22:03.315621   13747 main.go:141] libmachine: STDERR: 
	I1025 16:22:03.315698   13747 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2 +20000M
	I1025 16:22:03.324583   13747 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:03.324608   13747 main.go:141] libmachine: STDERR: 
	I1025 16:22:03.324624   13747 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2
	I1025 16:22:03.324631   13747 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:03.324643   13747 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:03.324681   13747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5f:38:28:09:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2
	I1025 16:22:03.326549   13747 main.go:141] libmachine: STDOUT: 
	I1025 16:22:03.326563   13747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:03.326581   13747 client.go:171] duration metric: took 242.382958ms to LocalClient.Create
	I1025 16:22:05.328841   13747 start.go:128] duration metric: took 2.262799125s to createHost
	I1025 16:22:05.328910   13747 start.go:83] releasing machines lock for "false-864000", held for 2.262923458s
	W1025 16:22:05.328997   13747 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:05.340237   13747 out.go:177] * Deleting "false-864000" in qemu2 ...
	W1025 16:22:05.363510   13747 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:05.363534   13747 start.go:729] Will try again in 5 seconds ...
	I1025 16:22:10.365762   13747 start.go:360] acquireMachinesLock for false-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:10.366445   13747 start.go:364] duration metric: took 561.375µs to acquireMachinesLock for "false-864000"
	I1025 16:22:10.366576   13747 start.go:93] Provisioning new machine with config: &{Name:false-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:10.366833   13747 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:10.377510   13747 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:10.426255   13747 start.go:159] libmachine.API.Create for "false-864000" (driver="qemu2")
	I1025 16:22:10.426320   13747 client.go:168] LocalClient.Create starting
	I1025 16:22:10.426509   13747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:10.426601   13747 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:10.426622   13747 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:10.426704   13747 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:10.426762   13747 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:10.426775   13747 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:10.427350   13747 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:10.598053   13747 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:10.630238   13747 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:10.630245   13747 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:10.630454   13747 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2
	I1025 16:22:10.640304   13747 main.go:141] libmachine: STDOUT: 
	I1025 16:22:10.640329   13747 main.go:141] libmachine: STDERR: 
	I1025 16:22:10.640385   13747 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2 +20000M
	I1025 16:22:10.649512   13747 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:10.649530   13747 main.go:141] libmachine: STDERR: 
	I1025 16:22:10.649556   13747 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2
	I1025 16:22:10.649561   13747 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:10.649572   13747 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:10.649603   13747 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:b4:77:8b:bc:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/false-864000/disk.qcow2
	I1025 16:22:10.651732   13747 main.go:141] libmachine: STDOUT: 
	I1025 16:22:10.651749   13747 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:10.651762   13747 client.go:171] duration metric: took 225.438667ms to LocalClient.Create
	I1025 16:22:12.653944   13747 start.go:128] duration metric: took 2.287074583s to createHost
	I1025 16:22:12.654014   13747 start.go:83] releasing machines lock for "false-864000", held for 2.2875585s
	W1025 16:22:12.654438   13747 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:12.668123   13747 out.go:201] 
	W1025 16:22:12.672313   13747 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:22:12.672354   13747 out.go:270] * 
	* 
	W1025 16:22:12.675011   13747 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:22:12.686018   13747 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.76s)

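The identical refusal across consecutive profiles points at the shared daemon rather than at any single test. A cheap preflight, run before the suite, is to wrap a no-op command in socket_vmnet_client exactly the way minikube does (the client hands the vmnet connection to its child on fd 3); this is a sketch under the same Homebrew-path assumptions as above:

	# Hypothetical preflight: succeeds only if socket_vmnet accepts a client
	if /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true; then
		echo "socket_vmnet reachable"
	else
		echo "socket_vmnet down; skipping qemu2 network tests" >&2
		exit 1
	fi

When the daemon is down, this prints the same Failed to connect to "/var/run/socket_vmnet": Connection refused seen throughout these failures and exits non-zero.
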
TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.792263417s)
-- stdout --
	* [enable-default-cni-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-864000" primary control-plane node in "enable-default-cni-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1025 16:22:15.067477   13856 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:22:15.067634   13856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:15.067637   13856 out.go:358] Setting ErrFile to fd 2...
	I1025 16:22:15.067640   13856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:15.067767   13856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:22:15.068873   13856 out.go:352] Setting JSON to false
	I1025 16:22:15.086639   13856 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7773,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:22:15.086710   13856 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:22:15.092165   13856 out.go:177] * [enable-default-cni-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:22:15.100248   13856 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:22:15.100346   13856 notify.go:220] Checking for updates...
	I1025 16:22:15.107192   13856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:22:15.110186   13856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:22:15.114231   13856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:22:15.117110   13856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:22:15.120206   13856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:22:15.123504   13856 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:22:15.123583   13856 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:22:15.123623   13856 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:22:15.127055   13856 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:22:15.134144   13856 start.go:297] selected driver: qemu2
	I1025 16:22:15.134149   13856 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:22:15.134155   13856 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:22:15.136687   13856 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:22:15.137920   13856 out.go:177] * Automatically selected the socket_vmnet network
	E1025 16:22:15.141289   13856 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1025 16:22:15.141302   13856 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:22:15.141325   13856 cni.go:84] Creating CNI manager for "bridge"
	I1025 16:22:15.141331   13856 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:22:15.141370   13856 start.go:340] cluster config:
	{Name:enable-default-cni-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:22:15.145857   13856 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:22:15.154171   13856 out.go:177] * Starting "enable-default-cni-864000" primary control-plane node in "enable-default-cni-864000" cluster
	I1025 16:22:15.158151   13856 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:22:15.158168   13856 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:22:15.158177   13856 cache.go:56] Caching tarball of preloaded images
	I1025 16:22:15.158258   13856 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:22:15.158264   13856 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:22:15.158317   13856 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/enable-default-cni-864000/config.json ...
	I1025 16:22:15.158328   13856 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/enable-default-cni-864000/config.json: {Name:mk2b5cac530c72a813a846bb043a10691e0d6bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:22:15.158644   13856 start.go:360] acquireMachinesLock for enable-default-cni-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:15.158706   13856 start.go:364] duration metric: took 51.167µs to acquireMachinesLock for "enable-default-cni-864000"
	I1025 16:22:15.158720   13856 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:15.158761   13856 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:15.166137   13856 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:15.181515   13856 start.go:159] libmachine.API.Create for "enable-default-cni-864000" (driver="qemu2")
	I1025 16:22:15.181541   13856 client.go:168] LocalClient.Create starting
	I1025 16:22:15.181615   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:15.181654   13856 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:15.181664   13856 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:15.181699   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:15.181730   13856 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:15.181737   13856 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:15.182098   13856 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:15.347165   13856 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:15.397769   13856 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:15.397776   13856 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:15.398187   13856 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2
	I1025 16:22:15.408153   13856 main.go:141] libmachine: STDOUT: 
	I1025 16:22:15.408175   13856 main.go:141] libmachine: STDERR: 
	I1025 16:22:15.408228   13856 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2 +20000M
	I1025 16:22:15.416794   13856 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:15.416809   13856 main.go:141] libmachine: STDERR: 
	I1025 16:22:15.416831   13856 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2
	I1025 16:22:15.416836   13856 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:15.416847   13856 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:15.416881   13856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:70:f8:2b:67:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2
	I1025 16:22:15.418720   13856 main.go:141] libmachine: STDOUT: 
	I1025 16:22:15.418733   13856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:15.418750   13856 client.go:171] duration metric: took 237.205917ms to LocalClient.Create
	I1025 16:22:17.421051   13856 start.go:128] duration metric: took 2.262282333s to createHost
	I1025 16:22:17.421143   13856 start.go:83] releasing machines lock for "enable-default-cni-864000", held for 2.262419083s
	W1025 16:22:17.421208   13856 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:17.432382   13856 out.go:177] * Deleting "enable-default-cni-864000" in qemu2 ...
	W1025 16:22:17.460675   13856 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:17.460710   13856 start.go:729] Will try again in 5 seconds ...
	I1025 16:22:22.462732   13856 start.go:360] acquireMachinesLock for enable-default-cni-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:22.462861   13856 start.go:364] duration metric: took 114µs to acquireMachinesLock for "enable-default-cni-864000"
	I1025 16:22:22.462878   13856 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:22.462931   13856 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:22.472162   13856 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:22.488274   13856 start.go:159] libmachine.API.Create for "enable-default-cni-864000" (driver="qemu2")
	I1025 16:22:22.488304   13856 client.go:168] LocalClient.Create starting
	I1025 16:22:22.488394   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:22.488445   13856 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:22.488455   13856 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:22.488494   13856 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:22.488530   13856 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:22.488539   13856 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:22.488942   13856 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:22.649722   13856 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:22.764939   13856 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:22.764947   13856 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:22.765176   13856 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2
	I1025 16:22:22.775884   13856 main.go:141] libmachine: STDOUT: 
	I1025 16:22:22.775919   13856 main.go:141] libmachine: STDERR: 
	I1025 16:22:22.775973   13856 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2 +20000M
	I1025 16:22:22.784609   13856 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:22.784625   13856 main.go:141] libmachine: STDERR: 
	I1025 16:22:22.784636   13856 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2
	I1025 16:22:22.784643   13856 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:22.784652   13856 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:22.784695   13856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:10:54:cc:98:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/enable-default-cni-864000/disk.qcow2
	I1025 16:22:22.786512   13856 main.go:141] libmachine: STDOUT: 
	I1025 16:22:22.786527   13856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:22.786539   13856 client.go:171] duration metric: took 298.231958ms to LocalClient.Create
	I1025 16:22:24.788713   13856 start.go:128] duration metric: took 2.325764541s to createHost
	I1025 16:22:24.788778   13856 start.go:83] releasing machines lock for "enable-default-cni-864000", held for 2.325921208s
	W1025 16:22:24.789126   13856 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:24.798671   13856 out.go:201] 
	W1025 16:22:24.803802   13856 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:22:24.803838   13856 out.go:270] * 
	* 
	W1025 16:22:24.805475   13856 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:22:24.813641   13856 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
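
This failure, and the flannel, bridge, and kubenet failures that follow, all share one root cause visible in the stderr above: minikube launches QEMU through socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, i.e. the socket_vmnet daemon is not running (or not listening on that path) on the CI host. minikube surfaces this as GUEST_PROVISION and exits with its reserved exit code 80, which is what net_test.go:114 reports. A minimal probe to confirm the daemon state in isolation (a sketch, using only the binaries and paths from the log; socket_vmnet_client connects to the socket and hands it to the wrapped command as fd 3, so any trivial command works):

	# Does the socket exist, and will the daemon accept a connection?
	ls -l /var/run/socket_vmnet
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# On a healthy host "true" runs and exits 0; in the state captured here the
	# client fails with: Failed to connect to "/var/run/socket_vmnet": Connection refused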

TestNetworkPlugins/group/flannel/Start (9.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.979241625s)

-- stdout --
	* [flannel-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-864000" primary control-plane node in "flannel-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:22:27.189917   13965 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:22:27.190072   13965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:27.190075   13965 out.go:358] Setting ErrFile to fd 2...
	I1025 16:22:27.190077   13965 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:27.190206   13965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:22:27.191373   13965 out.go:352] Setting JSON to false
	I1025 16:22:27.209231   13965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7785,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:22:27.209303   13965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:22:27.216124   13965 out.go:177] * [flannel-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:22:27.224202   13965 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:22:27.224289   13965 notify.go:220] Checking for updates...
	I1025 16:22:27.231107   13965 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:22:27.234148   13965 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:22:27.237107   13965 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:22:27.240181   13965 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:22:27.243181   13965 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:22:27.246407   13965 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:22:27.246480   13965 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:22:27.246535   13965 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:22:27.251092   13965 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:22:27.258151   13965 start.go:297] selected driver: qemu2
	I1025 16:22:27.258157   13965 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:22:27.258166   13965 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:22:27.260527   13965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:22:27.263100   13965 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:22:27.266283   13965 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:22:27.266301   13965 cni.go:84] Creating CNI manager for "flannel"
	I1025 16:22:27.266304   13965 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1025 16:22:27.266344   13965 start.go:340] cluster config:
	{Name:flannel-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:22:27.270780   13965 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:22:27.279099   13965 out.go:177] * Starting "flannel-864000" primary control-plane node in "flannel-864000" cluster
	I1025 16:22:27.282913   13965 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:22:27.282925   13965 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:22:27.282932   13965 cache.go:56] Caching tarball of preloaded images
	I1025 16:22:27.282996   13965 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:22:27.283001   13965 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:22:27.283053   13965 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/flannel-864000/config.json ...
	I1025 16:22:27.283063   13965 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/flannel-864000/config.json: {Name:mk0b31ab5400c23ec668635aba716681fb0da3b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:22:27.283307   13965 start.go:360] acquireMachinesLock for flannel-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:27.283349   13965 start.go:364] duration metric: took 36.916µs to acquireMachinesLock for "flannel-864000"
	I1025 16:22:27.283360   13965 start.go:93] Provisioning new machine with config: &{Name:flannel-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:27.283382   13965 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:27.291059   13965 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:27.305680   13965 start.go:159] libmachine.API.Create for "flannel-864000" (driver="qemu2")
	I1025 16:22:27.305710   13965 client.go:168] LocalClient.Create starting
	I1025 16:22:27.305793   13965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:27.305829   13965 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:27.305839   13965 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:27.305875   13965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:27.305907   13965 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:27.305916   13965 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:27.306266   13965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:27.464558   13965 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:27.550427   13965 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:27.550438   13965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:27.550632   13965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2
	I1025 16:22:27.560601   13965 main.go:141] libmachine: STDOUT: 
	I1025 16:22:27.560627   13965 main.go:141] libmachine: STDERR: 
	I1025 16:22:27.560700   13965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2 +20000M
	I1025 16:22:27.569386   13965 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:27.569401   13965 main.go:141] libmachine: STDERR: 
	I1025 16:22:27.569419   13965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2
	I1025 16:22:27.569425   13965 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:27.569436   13965 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:27.569463   13965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f6:b7:14:71:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2
	I1025 16:22:27.571247   13965 main.go:141] libmachine: STDOUT: 
	I1025 16:22:27.571261   13965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:27.571282   13965 client.go:171] duration metric: took 265.568792ms to LocalClient.Create
	I1025 16:22:29.573574   13965 start.go:128] duration metric: took 2.290175666s to createHost
	I1025 16:22:29.573675   13965 start.go:83] releasing machines lock for "flannel-864000", held for 2.290331917s
	W1025 16:22:29.573774   13965 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:29.585103   13965 out.go:177] * Deleting "flannel-864000" in qemu2 ...
	W1025 16:22:29.613088   13965 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:29.613116   13965 start.go:729] Will try again in 5 seconds ...
	I1025 16:22:34.615310   13965 start.go:360] acquireMachinesLock for flannel-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:34.615953   13965 start.go:364] duration metric: took 539.875µs to acquireMachinesLock for "flannel-864000"
	I1025 16:22:34.616066   13965 start.go:93] Provisioning new machine with config: &{Name:flannel-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:34.616335   13965 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:34.625993   13965 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:34.669912   13965 start.go:159] libmachine.API.Create for "flannel-864000" (driver="qemu2")
	I1025 16:22:34.669967   13965 client.go:168] LocalClient.Create starting
	I1025 16:22:34.670107   13965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:34.670194   13965 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:34.670210   13965 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:34.670267   13965 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:34.670323   13965 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:34.670337   13965 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:34.671167   13965 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:34.839729   13965 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:35.064898   13965 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:35.064912   13965 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:35.065160   13965 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2
	I1025 16:22:35.075612   13965 main.go:141] libmachine: STDOUT: 
	I1025 16:22:35.075633   13965 main.go:141] libmachine: STDERR: 
	I1025 16:22:35.075694   13965 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2 +20000M
	I1025 16:22:35.084150   13965 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:35.084179   13965 main.go:141] libmachine: STDERR: 
	I1025 16:22:35.084190   13965 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2
	I1025 16:22:35.084195   13965 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:35.084207   13965 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:35.084241   13965 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:4f:b4:9a:7c:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/flannel-864000/disk.qcow2
	I1025 16:22:35.086126   13965 main.go:141] libmachine: STDOUT: 
	I1025 16:22:35.086140   13965 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:35.086154   13965 client.go:171] duration metric: took 416.183875ms to LocalClient.Create
	I1025 16:22:37.088240   13965 start.go:128] duration metric: took 2.47189775s to createHost
	I1025 16:22:37.088291   13965 start.go:83] releasing machines lock for "flannel-864000", held for 2.472333834s
	W1025 16:22:37.088492   13965 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:37.102626   13965 out.go:201] 
	W1025 16:22:37.113553   13965 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:22:37.113590   13965 out.go:270] * 
	* 
	W1025 16:22:37.114879   13965 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:22:37.127863   13965 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.98s)
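
The recovery is on the host, not in minikube: the socket_vmnet daemon has to be brought back up before the qemu2 driver can attach VMs to the socket_vmnet network. A sketch of the restart, assuming socket_vmnet was installed via Homebrew and run as a root service as in minikube's qemu2 driver documentation (the service name is the Homebrew formula name; adjust if the daemon is managed differently on this agent):

	# The daemon must run as root to create the vmnet interface.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet
	ls -l /var/run/socket_vmnet   # the socket should reappear once the daemon is up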

TestNetworkPlugins/group/bridge/Start (9.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.856979042s)

-- stdout --
	* [bridge-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-864000" primary control-plane node in "bridge-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:22:39.719995   14083 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:22:39.720123   14083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:39.720126   14083 out.go:358] Setting ErrFile to fd 2...
	I1025 16:22:39.720128   14083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:39.720274   14083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:22:39.721492   14083 out.go:352] Setting JSON to false
	I1025 16:22:39.740787   14083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7797,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:22:39.740851   14083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:22:39.745849   14083 out.go:177] * [bridge-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:22:39.753812   14083 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:22:39.753841   14083 notify.go:220] Checking for updates...
	I1025 16:22:39.761787   14083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:22:39.764795   14083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:22:39.767810   14083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:22:39.774869   14083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:22:39.783821   14083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:22:39.787143   14083 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:22:39.787220   14083 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:22:39.787265   14083 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:22:39.791846   14083 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:22:39.799765   14083 start.go:297] selected driver: qemu2
	I1025 16:22:39.799771   14083 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:22:39.799777   14083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:22:39.802366   14083 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:22:39.806809   14083 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:22:39.807999   14083 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:22:39.808025   14083 cni.go:84] Creating CNI manager for "bridge"
	I1025 16:22:39.808035   14083 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:22:39.808086   14083 start.go:340] cluster config:
	{Name:bridge-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:22:39.813087   14083 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:22:39.820807   14083 out.go:177] * Starting "bridge-864000" primary control-plane node in "bridge-864000" cluster
	I1025 16:22:39.824720   14083 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:22:39.824735   14083 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:22:39.824744   14083 cache.go:56] Caching tarball of preloaded images
	I1025 16:22:39.824817   14083 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:22:39.824823   14083 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:22:39.824883   14083 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/bridge-864000/config.json ...
	I1025 16:22:39.824893   14083 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/bridge-864000/config.json: {Name:mk427e803a2cf948bb03e949116b82fc3abe932b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:22:39.825264   14083 start.go:360] acquireMachinesLock for bridge-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:39.825312   14083 start.go:364] duration metric: took 41.834µs to acquireMachinesLock for "bridge-864000"
	I1025 16:22:39.825323   14083 start.go:93] Provisioning new machine with config: &{Name:bridge-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:39.825362   14083 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:39.828804   14083 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:39.843866   14083 start.go:159] libmachine.API.Create for "bridge-864000" (driver="qemu2")
	I1025 16:22:39.843896   14083 client.go:168] LocalClient.Create starting
	I1025 16:22:39.843977   14083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:39.844014   14083 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:39.844028   14083 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:39.844071   14083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:39.844099   14083 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:39.844108   14083 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:39.844537   14083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:40.005666   14083 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:40.131411   14083 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:40.131419   14083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:40.131647   14083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2
	I1025 16:22:40.141818   14083 main.go:141] libmachine: STDOUT: 
	I1025 16:22:40.141843   14083 main.go:141] libmachine: STDERR: 
	I1025 16:22:40.141911   14083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2 +20000M
	I1025 16:22:40.150598   14083 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:40.150614   14083 main.go:141] libmachine: STDERR: 
	I1025 16:22:40.150636   14083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2
	I1025 16:22:40.150643   14083 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:40.150654   14083 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:40.150693   14083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:9c:02:b2:e2:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2
	I1025 16:22:40.152579   14083 main.go:141] libmachine: STDOUT: 
	I1025 16:22:40.152591   14083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:40.152611   14083 client.go:171] duration metric: took 308.710583ms to LocalClient.Create
	I1025 16:22:42.154800   14083 start.go:128] duration metric: took 2.329425333s to createHost
	I1025 16:22:42.154915   14083 start.go:83] releasing machines lock for "bridge-864000", held for 2.329604959s
	W1025 16:22:42.154982   14083 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:42.168196   14083 out.go:177] * Deleting "bridge-864000" in qemu2 ...
	W1025 16:22:42.189173   14083 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:42.189205   14083 start.go:729] Will try again in 5 seconds ...
	I1025 16:22:47.191352   14083 start.go:360] acquireMachinesLock for bridge-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:47.191712   14083 start.go:364] duration metric: took 281.875µs to acquireMachinesLock for "bridge-864000"
	I1025 16:22:47.191755   14083 start.go:93] Provisioning new machine with config: &{Name:bridge-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:47.191837   14083 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:47.201332   14083 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:47.228846   14083 start.go:159] libmachine.API.Create for "bridge-864000" (driver="qemu2")
	I1025 16:22:47.228885   14083 client.go:168] LocalClient.Create starting
	I1025 16:22:47.228988   14083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:47.229040   14083 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:47.229054   14083 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:47.229112   14083 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:47.229151   14083 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:47.229160   14083 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:47.229538   14083 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:47.392354   14083 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:47.485372   14083 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:47.485379   14083 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:47.485569   14083 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2
	I1025 16:22:47.495412   14083 main.go:141] libmachine: STDOUT: 
	I1025 16:22:47.495433   14083 main.go:141] libmachine: STDERR: 
	I1025 16:22:47.495491   14083 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2 +20000M
	I1025 16:22:47.503992   14083 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:47.504006   14083 main.go:141] libmachine: STDERR: 
	I1025 16:22:47.504018   14083 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2
	I1025 16:22:47.504023   14083 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:47.504031   14083 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:47.504074   14083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:62:d2:89:55:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/bridge-864000/disk.qcow2
	I1025 16:22:47.505952   14083 main.go:141] libmachine: STDOUT: 
	I1025 16:22:47.505976   14083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:47.506002   14083 client.go:171] duration metric: took 277.103375ms to LocalClient.Create
	I1025 16:22:49.508197   14083 start.go:128] duration metric: took 2.316343875s to createHost
	I1025 16:22:49.508297   14083 start.go:83] releasing machines lock for "bridge-864000", held for 2.316575917s
	W1025 16:22:49.508710   14083 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:49.515461   14083 out.go:201] 
	W1025 16:22:49.521510   14083 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:22:49.521554   14083 out.go:270] * 
	* 
	W1025 16:22:49.524312   14083 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:22:49.536398   14083 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.86s)
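Every failure in this group traces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU can start. The following is a minimal diagnostic sketch in Go (not part of the minikube test suite; the socket path is taken from the logs above) that checks whether the socket_vmnet daemon is accepting connections:

// socketcheck.go - probe the socket_vmnet unix socket used by the qemu2 driver.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path observed in the logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the condition the run keeps hitting ("Connection refused").
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

Running such a probe on the affected agent would distinguish a stopped daemon (connection refused, as seen here) from a permissions problem (permission denied).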

TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-864000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.823854916s)

-- stdout --
	* [kubenet-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-864000" primary control-plane node in "kubenet-864000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-864000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:22:51.924000   14194 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:22:51.924179   14194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:51.924183   14194 out.go:358] Setting ErrFile to fd 2...
	I1025 16:22:51.924187   14194 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:22:51.924339   14194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:22:51.925843   14194 out.go:352] Setting JSON to false
	I1025 16:22:51.944216   14194 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7809,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:22:51.944301   14194 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:22:51.950483   14194 out.go:177] * [kubenet-864000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:22:51.957312   14194 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:22:51.957373   14194 notify.go:220] Checking for updates...
	I1025 16:22:51.964271   14194 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:22:51.967293   14194 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:22:51.970422   14194 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:22:51.973306   14194 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:22:51.976304   14194 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:22:51.979668   14194 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:22:51.979738   14194 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:22:51.979786   14194 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:22:51.982248   14194 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:22:51.989365   14194 start.go:297] selected driver: qemu2
	I1025 16:22:51.989373   14194 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:22:51.989379   14194 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:22:51.991932   14194 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:22:51.995297   14194 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:22:51.998352   14194 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:22:51.998367   14194 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1025 16:22:51.998398   14194 start.go:340] cluster config:
	{Name:kubenet-864000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:22:52.002783   14194 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:22:52.010301   14194 out.go:177] * Starting "kubenet-864000" primary control-plane node in "kubenet-864000" cluster
	I1025 16:22:52.014329   14194 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:22:52.014346   14194 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:22:52.014358   14194 cache.go:56] Caching tarball of preloaded images
	I1025 16:22:52.014470   14194 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:22:52.014488   14194 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:22:52.014548   14194 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kubenet-864000/config.json ...
	I1025 16:22:52.014558   14194 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/kubenet-864000/config.json: {Name:mkf91e1b3bdf893f933e7d33a678a242dfdad249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:22:52.014788   14194 start.go:360] acquireMachinesLock for kubenet-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:52.014831   14194 start.go:364] duration metric: took 37.292µs to acquireMachinesLock for "kubenet-864000"
	I1025 16:22:52.014842   14194 start.go:93] Provisioning new machine with config: &{Name:kubenet-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:52.014887   14194 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:52.023328   14194 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:52.037781   14194 start.go:159] libmachine.API.Create for "kubenet-864000" (driver="qemu2")
	I1025 16:22:52.037816   14194 client.go:168] LocalClient.Create starting
	I1025 16:22:52.037889   14194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:52.037928   14194 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:52.037938   14194 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:52.037979   14194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:52.038007   14194 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:52.038013   14194 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:52.038402   14194 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:52.198483   14194 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:52.298898   14194 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:52.298907   14194 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:52.299108   14194 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2
	I1025 16:22:52.309140   14194 main.go:141] libmachine: STDOUT: 
	I1025 16:22:52.309164   14194 main.go:141] libmachine: STDERR: 
	I1025 16:22:52.309221   14194 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2 +20000M
	I1025 16:22:52.317802   14194 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:52.317819   14194 main.go:141] libmachine: STDERR: 
	I1025 16:22:52.317836   14194 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2
	I1025 16:22:52.317842   14194 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:52.317854   14194 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:52.317886   14194 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:d7:82:ac:a8:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2
	I1025 16:22:52.320009   14194 main.go:141] libmachine: STDOUT: 
	I1025 16:22:52.320024   14194 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:52.320045   14194 client.go:171] duration metric: took 282.22325ms to LocalClient.Create
	I1025 16:22:54.322106   14194 start.go:128] duration metric: took 2.307227334s to createHost
	I1025 16:22:54.322176   14194 start.go:83] releasing machines lock for "kubenet-864000", held for 2.307335334s
	W1025 16:22:54.322198   14194 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:54.331928   14194 out.go:177] * Deleting "kubenet-864000" in qemu2 ...
	W1025 16:22:54.345487   14194 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:22:54.345495   14194 start.go:729] Will try again in 5 seconds ...
	I1025 16:22:59.347591   14194 start.go:360] acquireMachinesLock for kubenet-864000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:22:59.347909   14194 start.go:364] duration metric: took 271.209µs to acquireMachinesLock for "kubenet-864000"
	I1025 16:22:59.347942   14194 start.go:93] Provisioning new machine with config: &{Name:kubenet-864000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-864000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:22:59.348040   14194 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:22:59.362369   14194 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 16:22:59.396455   14194 start.go:159] libmachine.API.Create for "kubenet-864000" (driver="qemu2")
	I1025 16:22:59.396495   14194 client.go:168] LocalClient.Create starting
	I1025 16:22:59.396625   14194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:22:59.396693   14194 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:59.396710   14194 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:59.396765   14194 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:22:59.396816   14194 main.go:141] libmachine: Decoding PEM data...
	I1025 16:22:59.396829   14194 main.go:141] libmachine: Parsing certificate...
	I1025 16:22:59.397415   14194 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:22:59.565470   14194 main.go:141] libmachine: Creating SSH key...
	I1025 16:22:59.649263   14194 main.go:141] libmachine: Creating Disk image...
	I1025 16:22:59.649271   14194 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:22:59.649476   14194 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2
	I1025 16:22:59.659710   14194 main.go:141] libmachine: STDOUT: 
	I1025 16:22:59.659734   14194 main.go:141] libmachine: STDERR: 
	I1025 16:22:59.659802   14194 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2 +20000M
	I1025 16:22:59.668330   14194 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:22:59.668347   14194 main.go:141] libmachine: STDERR: 
	I1025 16:22:59.668363   14194 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2
	I1025 16:22:59.668369   14194 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:22:59.668381   14194 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:22:59.668410   14194 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:ed:85:4b:36:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/kubenet-864000/disk.qcow2
	I1025 16:22:59.670321   14194 main.go:141] libmachine: STDOUT: 
	I1025 16:22:59.670335   14194 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:22:59.670349   14194 client.go:171] duration metric: took 273.848625ms to LocalClient.Create
	I1025 16:23:01.672566   14194 start.go:128] duration metric: took 2.324501458s to createHost
	I1025 16:23:01.672661   14194 start.go:83] releasing machines lock for "kubenet-864000", held for 2.324752125s
	W1025 16:23:01.673069   14194 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-864000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:01.684851   14194 out.go:201] 
	W1025 16:23:01.688901   14194 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:01.688954   14194 out.go:270] * 
	W1025 16:23:01.691478   14194 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:01.701717   14194 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
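The kubenet failure is identical. For context, the test harness drives each of these starts in essentially the following shape: invoke the minikube binary and assert on the exit code (exit status 80 here corresponds to the GUEST_PROVISION error in the log). This is a simplified sketch, assuming the binary path and flags shown above, not the harness's exact code:

// runstart.go - rough illustration of how net_test.go-style helpers run minikube.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "kubenet-864000", "--memory=3072", "--driver=qemu2")
	out, err := cmd.CombinedOutput() // the combined stdout/stderr quoted in this report
	fmt.Printf("%s", out)
	if err != nil {
		// ProcessState is populated even when the command exits non-zero.
		fmt.Printf("failed start: exit status %d\n", cmd.ProcessState.ExitCode())
	}
}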

TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-213000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-213000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.873810625s)

-- stdout --
	* [old-k8s-version-213000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-213000" primary control-plane node in "old-k8s-version-213000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-213000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:23:04.119098   14303 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:04.119255   14303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:04.119259   14303 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:04.119261   14303 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:04.119385   14303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:04.120545   14303 out.go:352] Setting JSON to false
	I1025 16:23:04.138597   14303 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7822,"bootTime":1729890762,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:04.138664   14303 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:04.145134   14303 out.go:177] * [old-k8s-version-213000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:04.153275   14303 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:04.153366   14303 notify.go:220] Checking for updates...
	I1025 16:23:04.160232   14303 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:04.163247   14303 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:04.167210   14303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:04.170222   14303 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:04.173261   14303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:04.176653   14303 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:04.176730   14303 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:23:04.176772   14303 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:04.181130   14303 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:23:04.188228   14303 start.go:297] selected driver: qemu2
	I1025 16:23:04.188235   14303 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:23:04.188243   14303 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:04.190713   14303 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:23:04.194251   14303 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:23:04.197288   14303 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:04.197305   14303 cni.go:84] Creating CNI manager for ""
	I1025 16:23:04.197325   14303 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 16:23:04.197347   14303 start.go:340] cluster config:
	{Name:old-k8s-version-213000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:04.201820   14303 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:04.210222   14303 out.go:177] * Starting "old-k8s-version-213000" primary control-plane node in "old-k8s-version-213000" cluster
	I1025 16:23:04.214061   14303 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 16:23:04.214074   14303 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 16:23:04.214081   14303 cache.go:56] Caching tarball of preloaded images
	I1025 16:23:04.214150   14303 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:23:04.214156   14303 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 16:23:04.214236   14303 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/old-k8s-version-213000/config.json ...
	I1025 16:23:04.214247   14303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/old-k8s-version-213000/config.json: {Name:mk17fb62ef2f3e94a746a435e8746b6f0a4051dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:23:04.214583   14303 start.go:360] acquireMachinesLock for old-k8s-version-213000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:04.214640   14303 start.go:364] duration metric: took 48.167µs to acquireMachinesLock for "old-k8s-version-213000"
	I1025 16:23:04.214654   14303 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:04.214690   14303 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:04.219254   14303 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:04.234210   14303 start.go:159] libmachine.API.Create for "old-k8s-version-213000" (driver="qemu2")
	I1025 16:23:04.234239   14303 client.go:168] LocalClient.Create starting
	I1025 16:23:04.234330   14303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:04.234370   14303 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:04.234384   14303 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:04.234420   14303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:04.234449   14303 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:04.234456   14303 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:04.234836   14303 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:04.396427   14303 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:04.563387   14303 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:04.563396   14303 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:04.563622   14303 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:04.573995   14303 main.go:141] libmachine: STDOUT: 
	I1025 16:23:04.574017   14303 main.go:141] libmachine: STDERR: 
	I1025 16:23:04.574088   14303 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2 +20000M
	I1025 16:23:04.582721   14303 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:04.582735   14303 main.go:141] libmachine: STDERR: 
	I1025 16:23:04.582751   14303 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:04.582755   14303 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:04.582766   14303 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:04.582793   14303 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:da:da:cc:84:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:04.584656   14303 main.go:141] libmachine: STDOUT: 
	I1025 16:23:04.584673   14303 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:04.584695   14303 client.go:171] duration metric: took 350.451667ms to LocalClient.Create
	I1025 16:23:06.586842   14303 start.go:128] duration metric: took 2.372146375s to createHost
	I1025 16:23:06.586911   14303 start.go:83] releasing machines lock for "old-k8s-version-213000", held for 2.372278709s
	W1025 16:23:06.586966   14303 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:06.599729   14303 out.go:177] * Deleting "old-k8s-version-213000" in qemu2 ...
	W1025 16:23:06.621894   14303 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:06.621923   14303 start.go:729] Will try again in 5 seconds ...
	I1025 16:23:11.624116   14303 start.go:360] acquireMachinesLock for old-k8s-version-213000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:11.624678   14303 start.go:364] duration metric: took 468.208µs to acquireMachinesLock for "old-k8s-version-213000"
	I1025 16:23:11.624815   14303 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:11.625040   14303 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:11.635646   14303 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:11.681708   14303 start.go:159] libmachine.API.Create for "old-k8s-version-213000" (driver="qemu2")
	I1025 16:23:11.681765   14303 client.go:168] LocalClient.Create starting
	I1025 16:23:11.681908   14303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:11.681990   14303 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:11.682009   14303 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:11.682077   14303 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:11.682134   14303 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:11.682160   14303 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:11.682746   14303 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:11.851731   14303 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:11.893046   14303 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:11.893052   14303 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:11.893262   14303 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:11.903303   14303 main.go:141] libmachine: STDOUT: 
	I1025 16:23:11.903328   14303 main.go:141] libmachine: STDERR: 
	I1025 16:23:11.903390   14303 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2 +20000M
	I1025 16:23:11.911948   14303 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:11.911966   14303 main.go:141] libmachine: STDERR: 
	I1025 16:23:11.911978   14303 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:11.911981   14303 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:11.911991   14303 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:11.912014   14303 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:60:31:42:8c:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:11.913846   14303 main.go:141] libmachine: STDOUT: 
	I1025 16:23:11.913863   14303 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:11.913884   14303 client.go:171] duration metric: took 232.106041ms to LocalClient.Create
	I1025 16:23:13.916156   14303 start.go:128] duration metric: took 2.29109225s to createHost
	I1025 16:23:13.916230   14303 start.go:83] releasing machines lock for "old-k8s-version-213000", held for 2.291546417s
	W1025 16:23:13.916568   14303 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-213000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:13.931313   14303 out.go:201] 
	W1025 16:23:13.934386   14303 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:13.934404   14303 out.go:270] * 
	W1025 16:23:13.936594   14303 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:13.949303   14303 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-213000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (69.528541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)
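The log above also shows minikube's two-attempt flow: createHost fails, the partial profile is deleted, it waits five seconds, retries once, then exits with GUEST_PROVISION. A sketch of that control flow follows; it illustrates the behavior visible in the log, not minikube's actual implementation:

// retryflow.go - the retry shape visible in the start.go log lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the qemu2 driver call that fails in this run.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}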

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-213000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-213000 create -f testdata/busybox.yaml: exit status 1 (29.644542ms)

** stderr ** 
	error: context "old-k8s-version-213000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-213000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (34.970166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (33.3905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
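
Note: the kubectl steps in this group fail client-side, not cluster-side. Because FirstStart never provisioned the VM, minikube never wrote an old-k8s-version-213000 entry into the kubeconfig, hence the repeated 'error: context "old-k8s-version-213000" does not exist'. A quick confirmation with plain kubectl, using the KUBECONFIG path printed earlier in this report:

	KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig kubectl config get-contexts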

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-213000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-213000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-213000 describe deploy/metrics-server -n kube-system: exit status 1 (29.229667ms)

** stderr ** 
	error: context "old-k8s-version-213000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-213000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (33.389958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-213000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-213000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.194304125s)

-- stdout --
	* [old-k8s-version-213000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-213000" primary control-plane node in "old-k8s-version-213000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-213000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-213000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:23:16.434068   14349 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:16.434218   14349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:16.434221   14349 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:16.434224   14349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:16.434363   14349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:16.435486   14349 out.go:352] Setting JSON to false
	I1025 16:23:16.453255   14349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7834,"bootTime":1729890762,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:16.453335   14349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:16.458345   14349 out.go:177] * [old-k8s-version-213000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:16.466326   14349 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:16.466406   14349 notify.go:220] Checking for updates...
	I1025 16:23:16.473257   14349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:16.477305   14349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:16.480266   14349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:16.483267   14349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:16.486275   14349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:16.489602   14349 config.go:182] Loaded profile config "old-k8s-version-213000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1025 16:23:16.493211   14349 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1025 16:23:16.496337   14349 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:16.499249   14349 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:23:16.506302   14349 start.go:297] selected driver: qemu2
	I1025 16:23:16.506308   14349 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:16.506356   14349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:16.508981   14349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:16.509008   14349 cni.go:84] Creating CNI manager for ""
	I1025 16:23:16.509027   14349 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 16:23:16.509052   14349 start.go:340] cluster config:
	{Name:old-k8s-version-213000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-213000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:16.513478   14349 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:16.522253   14349 out.go:177] * Starting "old-k8s-version-213000" primary control-plane node in "old-k8s-version-213000" cluster
	I1025 16:23:16.525246   14349 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 16:23:16.525263   14349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 16:23:16.525272   14349 cache.go:56] Caching tarball of preloaded images
	I1025 16:23:16.525348   14349 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:23:16.525359   14349 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 16:23:16.525409   14349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/old-k8s-version-213000/config.json ...
	I1025 16:23:16.525847   14349 start.go:360] acquireMachinesLock for old-k8s-version-213000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:16.525878   14349 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "old-k8s-version-213000"
	I1025 16:23:16.525886   14349 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:23:16.525891   14349 fix.go:54] fixHost starting: 
	I1025 16:23:16.526011   14349 fix.go:112] recreateIfNeeded on old-k8s-version-213000: state=Stopped err=<nil>
	W1025 16:23:16.526019   14349 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:23:16.529337   14349 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-213000" ...
	I1025 16:23:16.535307   14349 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:16.535349   14349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:60:31:42:8c:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:16.537498   14349 main.go:141] libmachine: STDOUT: 
	I1025 16:23:16.537518   14349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:16.537547   14349 fix.go:56] duration metric: took 11.6545ms for fixHost
	I1025 16:23:16.537553   14349 start.go:83] releasing machines lock for "old-k8s-version-213000", held for 11.670333ms
	W1025 16:23:16.537558   14349 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:16.537599   14349 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:16.537603   14349 start.go:729] Will try again in 5 seconds ...
	I1025 16:23:21.539847   14349 start.go:360] acquireMachinesLock for old-k8s-version-213000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:21.540392   14349 start.go:364] duration metric: took 426.625µs to acquireMachinesLock for "old-k8s-version-213000"
	I1025 16:23:21.540481   14349 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:23:21.540504   14349 fix.go:54] fixHost starting: 
	I1025 16:23:21.541265   14349 fix.go:112] recreateIfNeeded on old-k8s-version-213000: state=Stopped err=<nil>
	W1025 16:23:21.541295   14349 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:23:21.546160   14349 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-213000" ...
	I1025 16:23:21.552954   14349 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:21.553192   14349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:60:31:42:8c:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/old-k8s-version-213000/disk.qcow2
	I1025 16:23:21.563657   14349 main.go:141] libmachine: STDOUT: 
	I1025 16:23:21.563712   14349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:21.563798   14349 fix.go:56] duration metric: took 23.295625ms for fixHost
	I1025 16:23:21.563818   14349 start.go:83] releasing machines lock for "old-k8s-version-213000", held for 23.401958ms
	W1025 16:23:21.564012   14349 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-213000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-213000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:21.570952   14349 out.go:201] 
	W1025 16:23:21.575045   14349 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:21.575077   14349 out.go:270] * 
	* 
	W1025 16:23:21.577395   14349 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:21.583946   14349 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-213000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (67.723959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
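
Note: the output above suggests "minikube delete -p old-k8s-version-213000" as a fix. The corresponding recovery sequence would be the sketch below, but since the root cause is the refused connection to /var/run/socket_vmnet on the host, recreating the profile alone would most likely hit the same error until the daemon is restored:

	out/minikube-darwin-arm64 delete -p old-k8s-version-213000
	out/minikube-darwin-arm64 start -p old-k8s-version-213000 --driver=qemu2 --kubernetes-version=v1.20.0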

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-213000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (34.905875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-213000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-213000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-213000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.819375ms)

** stderr ** 
	error: context "old-k8s-version-213000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-213000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (33.892ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-213000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (34.97875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
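
Note: the diff above uses go-cmp's (-want +got) notation: each line prefixed with "-" is an image the test expected "image list" to report for v1.20.0, and the absence of any "+" lines means the command returned nothing, consistent with the VM never having started. The command under test can be replayed directly against the profile:

	out/minikube-darwin-arm64 -p old-k8s-version-213000 image list --format=json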

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-213000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-213000 --alsologtostderr -v=1: exit status 83 (45.071125ms)

-- stdout --
	* The control-plane node old-k8s-version-213000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-213000"

-- /stdout --
** stderr ** 
	I1025 16:23:21.876296   14368 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:21.877241   14368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:21.877245   14368 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:21.877247   14368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:21.877370   14368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:21.877584   14368 out.go:352] Setting JSON to false
	I1025 16:23:21.877597   14368 mustload.go:65] Loading cluster: old-k8s-version-213000
	I1025 16:23:21.877814   14368 config.go:182] Loaded profile config "old-k8s-version-213000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1025 16:23:21.882692   14368 out.go:177] * The control-plane node old-k8s-version-213000 host is not running: state=Stopped
	I1025 16:23:21.885550   14368 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-213000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-213000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (34.557166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (34.007417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-213000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.946055583s)

-- stdout --
	* [no-preload-140000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-140000" primary control-plane node in "no-preload-140000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I1025 16:23:22.219064   14385 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:22.219230   14385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:22.219238   14385 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:22.219241   14385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:22.219382   14385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:22.220571   14385 out.go:352] Setting JSON to false
	I1025 16:23:22.239035   14385 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7840,"bootTime":1729890762,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:22.239119   14385 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:22.244067   14385 out.go:177] * [no-preload-140000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:22.251019   14385 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:22.251072   14385 notify.go:220] Checking for updates...
	I1025 16:23:22.256971   14385 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:22.259989   14385 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:22.263051   14385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:22.265959   14385 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:22.275059   14385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:22.279260   14385 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:22.279326   14385 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:23:22.279377   14385 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:22.282994   14385 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:23:22.289972   14385 start.go:297] selected driver: qemu2
	I1025 16:23:22.289989   14385 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:23:22.289997   14385 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:22.293203   14385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:23:22.296024   14385 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:23:22.299133   14385 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:22.299170   14385 cni.go:84] Creating CNI manager for ""
	I1025 16:23:22.299197   14385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:23:22.299205   14385 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:23:22.299259   14385 start.go:340] cluster config:
	{Name:no-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:22.304833   14385 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.308965   14385 out.go:177] * Starting "no-preload-140000" primary control-plane node in "no-preload-140000" cluster
	I1025 16:23:22.316790   14385 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:23:22.316866   14385 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/no-preload-140000/config.json ...
	I1025 16:23:22.316883   14385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/no-preload-140000/config.json: {Name:mk7025425a068b6cf36137a7cd2d7779580753c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:23:22.316888   14385 cache.go:107] acquiring lock: {Name:mka77912f6392ad84bb54095bdca3bc598633fbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.316929   14385 cache.go:107] acquiring lock: {Name:mkc62f2582073f752e03792759daea1f1d9d664f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.316950   14385 cache.go:107] acquiring lock: {Name:mk8f42c99510a43f96d9991b4d8f5bee12515fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.316981   14385 cache.go:107] acquiring lock: {Name:mk74f0968a9fa19d9a370578cc0953de2c3948a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.317044   14385 cache.go:107] acquiring lock: {Name:mk3c61ae936287fa73a8d8900e8b44bb3f168345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.317034   14385 cache.go:107] acquiring lock: {Name:mk8c773bec036909aad76149378bad80d32cd7c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.316917   14385 cache.go:107] acquiring lock: {Name:mk4704346cfb02ee5f3842671aca45179717c7d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.317095   14385 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1025 16:23:22.317146   14385 cache.go:107] acquiring lock: {Name:mk39d74bb003cbba6228c6c7f277ba11d98ac46f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:22.317227   14385 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1025 16:23:22.317238   14385 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1025 16:23:22.317245   14385 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 16:23:22.317260   14385 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 372.959µs
	I1025 16:23:22.317274   14385 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 16:23:22.317380   14385 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1025 16:23:22.317442   14385 start.go:360] acquireMachinesLock for no-preload-140000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:22.317462   14385 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1025 16:23:22.317465   14385 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1025 16:23:22.317487   14385 start.go:364] duration metric: took 40.125µs to acquireMachinesLock for "no-preload-140000"
	I1025 16:23:22.317499   14385 start.go:93] Provisioning new machine with config: &{Name:no-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:22.317535   14385 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:22.317582   14385 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1025 16:23:22.325969   14385 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:22.329390   14385 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1025 16:23:22.329962   14385 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1025 16:23:22.329969   14385 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1025 16:23:22.330054   14385 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1025 16:23:22.330061   14385 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1025 16:23:22.330091   14385 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1025 16:23:22.330107   14385 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1025 16:23:22.341511   14385 start.go:159] libmachine.API.Create for "no-preload-140000" (driver="qemu2")
	I1025 16:23:22.341534   14385 client.go:168] LocalClient.Create starting
	I1025 16:23:22.341619   14385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:22.341656   14385 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:22.341668   14385 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:22.341714   14385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:22.341745   14385 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:22.341754   14385 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:22.342115   14385 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:22.507150   14385 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:22.588528   14385 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:22.588544   14385 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:22.588764   14385 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:22.598792   14385 main.go:141] libmachine: STDOUT: 
	I1025 16:23:22.598820   14385 main.go:141] libmachine: STDERR: 
	I1025 16:23:22.598890   14385 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2 +20000M
	I1025 16:23:22.608618   14385 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:22.608638   14385 main.go:141] libmachine: STDERR: 
	I1025 16:23:22.608660   14385 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:22.608664   14385 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:22.608676   14385 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:22.608703   14385 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:dc:59:96:e2:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:22.610777   14385 main.go:141] libmachine: STDOUT: 
	I1025 16:23:22.610792   14385 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:22.610813   14385 client.go:171] duration metric: took 269.275833ms to LocalClient.Create
	I1025 16:23:22.774208   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1025 16:23:22.788139   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1025 16:23:22.822097   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1025 16:23:22.836296   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1025 16:23:22.913746   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1025 16:23:22.968156   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1025 16:23:23.074862   14385 cache.go:162] opening:  /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1025 16:23:23.211089   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1025 16:23:23.211116   14385 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 894.190917ms
	I1025 16:23:23.211130   14385 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1025 16:23:24.610958   14385 start.go:128] duration metric: took 2.293423333s to createHost
	I1025 16:23:24.610994   14385 start.go:83] releasing machines lock for "no-preload-140000", held for 2.293517667s
	W1025 16:23:24.611021   14385 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:24.626956   14385 out.go:177] * Deleting "no-preload-140000" in qemu2 ...
	W1025 16:23:24.644571   14385 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:24.644585   14385 start.go:729] Will try again in 5 seconds ...
	I1025 16:23:25.810717   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1025 16:23:25.810778   14385 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.493854625s
	I1025 16:23:25.810802   14385 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1025 16:23:26.304616   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1025 16:23:26.304633   14385 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.9876795s
	I1025 16:23:26.304667   14385 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1025 16:23:26.850863   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1025 16:23:26.850877   14385 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.533927791s
	I1025 16:23:26.850884   14385 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1025 16:23:27.751528   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1025 16:23:27.751555   14385 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.434570917s
	I1025 16:23:27.751580   14385 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1025 16:23:27.841907   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1025 16:23:27.841923   14385 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 5.524971333s
	I1025 16:23:27.841931   14385 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1025 16:23:29.644972   14385 start.go:360] acquireMachinesLock for no-preload-140000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:29.645583   14385 start.go:364] duration metric: took 484.25µs to acquireMachinesLock for "no-preload-140000"
	I1025 16:23:29.645730   14385 start.go:93] Provisioning new machine with config: &{Name:no-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:29.645964   14385 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:29.656795   14385 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:29.705735   14385 start.go:159] libmachine.API.Create for "no-preload-140000" (driver="qemu2")
	I1025 16:23:29.705798   14385 client.go:168] LocalClient.Create starting
	I1025 16:23:29.705979   14385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:29.706063   14385 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:29.706085   14385 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:29.706167   14385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:29.706225   14385 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:29.706241   14385 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:29.706804   14385 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:29.875071   14385 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:30.065225   14385 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:30.065238   14385 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:30.065485   14385 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:30.076033   14385 main.go:141] libmachine: STDOUT: 
	I1025 16:23:30.076070   14385 main.go:141] libmachine: STDERR: 
	I1025 16:23:30.076151   14385 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2 +20000M
	I1025 16:23:30.084892   14385 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:30.084922   14385 main.go:141] libmachine: STDERR: 
	I1025 16:23:30.084935   14385 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:30.084948   14385 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:30.084959   14385 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:30.084996   14385 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f0:d5:94:df:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:30.086933   14385 main.go:141] libmachine: STDOUT: 
	I1025 16:23:30.086959   14385 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:30.086979   14385 client.go:171] duration metric: took 381.176709ms to LocalClient.Create
	I1025 16:23:31.096359   14385 cache.go:157] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1025 16:23:31.096425   14385 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.779566791s
	I1025 16:23:31.096443   14385 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1025 16:23:31.096514   14385 cache.go:87] Successfully saved all images to host disk.
	I1025 16:23:32.089236   14385 start.go:128] duration metric: took 2.443228542s to createHost
	I1025 16:23:32.089315   14385 start.go:83] releasing machines lock for "no-preload-140000", held for 2.44372s
	W1025 16:23:32.089615   14385 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:32.098066   14385 out.go:201] 
	W1025 16:23:32.104253   14385 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:32.104308   14385 out.go:270] * 
	* 
	W1025 16:23:32.106815   14385 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:32.117083   14385 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (68.041459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)
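
Every failed start in this group dies at the same step: socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation aborts. That points at the host-side socket_vmnet daemon rather than at minikube or QEMU. A minimal diagnostic sketch, assuming only the default socket path shown in the log, is to dial the socket directly:

// probe_socket.go - sketch: check whether the socket_vmnet daemon is
// accepting connections on its Unix socket (path taken from the log).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "Connection refused" here matches the STDERR in the log: the
		// socket file may exist, but no daemon is listening behind it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Until the daemon is restarted on the CI host, this probe fails with the same "Connection refused" the tests report, which is why the remaining subtests in the group cascade into derived failures.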

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-140000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-140000 create -f testdata/busybox.yaml: exit status 1 (30.236542ms)

** stderr ** 
	error: context "no-preload-140000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-140000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (33.563542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (33.168792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
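
This subtest and the other kubectl-based ones that follow (EnableAddonWhileActive, AddonExistsAfterStop) fail for the same derived reason: FirstStart never created a VM, so no "no-preload-140000" context was ever written to the kubeconfig, and every kubectl --context invocation exits 1. A short sketch, assuming the standard kubeconfig loading rules from k8s.io/client-go, of checking for the context up front instead of parsing kubectl's stderr:

// context_check.go - sketch: verify a kubeconfig context exists before
// invoking kubectl against it. Uses client-go's default loading rules
// (the KUBECONFIG env var, falling back to ~/.kube/config).
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintf(os.Stderr, "loading kubeconfig: %v\n", err)
		os.Exit(1)
	}
	name := "no-preload-140000" // the profile from the failing tests
	if _, ok := cfg.Contexts[name]; !ok {
		// Matches the kubectl error in the log:
		//   error: context "no-preload-140000" does not exist
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", name)
}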

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-140000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-140000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-140000 describe deploy/metrics-server -n kube-system: exit status 1 (27.428375ms)

** stderr ** 
	error: context "no-preload-140000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-140000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (34.418958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.185826792s)

-- stdout --
	* [no-preload-140000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-140000" primary control-plane node in "no-preload-140000" cluster
	* Restarting existing qemu2 VM for "no-preload-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-140000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:23:35.575454   14468 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:35.575620   14468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:35.575624   14468 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:35.575626   14468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:35.575747   14468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:35.576806   14468 out.go:352] Setting JSON to false
	I1025 16:23:35.594873   14468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7853,"bootTime":1729890762,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:35.594970   14468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:35.600189   14468 out.go:177] * [no-preload-140000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:35.607145   14468 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:35.607258   14468 notify.go:220] Checking for updates...
	I1025 16:23:35.614114   14468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:35.617128   14468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:35.620181   14468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:35.621547   14468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:35.624153   14468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:35.627491   14468 config.go:182] Loaded profile config "no-preload-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:35.627750   14468 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:35.629410   14468 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:23:35.636194   14468 start.go:297] selected driver: qemu2
	I1025 16:23:35.636202   14468 start.go:901] validating driver "qemu2" against &{Name:no-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:no-preload-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:35.636298   14468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:35.638805   14468 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:35.638836   14468 cni.go:84] Creating CNI manager for ""
	I1025 16:23:35.638862   14468 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:23:35.638889   14468 start.go:340] cluster config:
	{Name:no-preload-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-140000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:35.643218   14468 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.651168   14468 out.go:177] * Starting "no-preload-140000" primary control-plane node in "no-preload-140000" cluster
	I1025 16:23:35.655219   14468 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:23:35.655290   14468 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/no-preload-140000/config.json ...
	I1025 16:23:35.655299   14468 cache.go:107] acquiring lock: {Name:mka77912f6392ad84bb54095bdca3bc598633fbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655300   14468 cache.go:107] acquiring lock: {Name:mk39d74bb003cbba6228c6c7f277ba11d98ac46f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655305   14468 cache.go:107] acquiring lock: {Name:mk3c61ae936287fa73a8d8900e8b44bb3f168345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655324   14468 cache.go:107] acquiring lock: {Name:mk74f0968a9fa19d9a370578cc0953de2c3948a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655336   14468 cache.go:107] acquiring lock: {Name:mk8f42c99510a43f96d9991b4d8f5bee12515fe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655396   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1025 16:23:35.655403   14468 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 116.167µs
	I1025 16:23:35.655410   14468 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1025 16:23:35.655413   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1025 16:23:35.655417   14468 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 94.042µs
	I1025 16:23:35.655421   14468 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1025 16:23:35.655421   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1025 16:23:35.655426   14468 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 90.584µs
	I1025 16:23:35.655430   14468 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1025 16:23:35.655465   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 16:23:35.655469   14468 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 175.708µs
	I1025 16:23:35.655473   14468 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 16:23:35.655465   14468 cache.go:107] acquiring lock: {Name:mk4704346cfb02ee5f3842671aca45179717c7d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655470   14468 cache.go:107] acquiring lock: {Name:mk8c773bec036909aad76149378bad80d32cd7c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655505   14468 cache.go:107] acquiring lock: {Name:mkc62f2582073f752e03792759daea1f1d9d664f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:35.655555   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1025 16:23:35.655560   14468 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 115.916µs
	I1025 16:23:35.655567   14468 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1025 16:23:35.655585   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1025 16:23:35.655590   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1025 16:23:35.655589   14468 cache.go:115] /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1025 16:23:35.655592   14468 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 297.583µs
	I1025 16:23:35.655597   14468 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1025 16:23:35.655593   14468 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 168.416µs
	I1025 16:23:35.655605   14468 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1025 16:23:35.655597   14468 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 190.917µs
	I1025 16:23:35.655608   14468 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1025 16:23:35.655610   14468 cache.go:87] Successfully saved all images to host disk.
	I1025 16:23:35.655659   14468 start.go:360] acquireMachinesLock for no-preload-140000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:35.655691   14468 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "no-preload-140000"
	I1025 16:23:35.655700   14468 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:23:35.655703   14468 fix.go:54] fixHost starting: 
	I1025 16:23:35.655818   14468 fix.go:112] recreateIfNeeded on no-preload-140000: state=Stopped err=<nil>
	W1025 16:23:35.655826   14468 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:23:35.664134   14468 out.go:177] * Restarting existing qemu2 VM for "no-preload-140000" ...
	I1025 16:23:35.668132   14468 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:35.668182   14468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f0:d5:94:df:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:35.670378   14468 main.go:141] libmachine: STDOUT: 
	I1025 16:23:35.670402   14468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:35.670425   14468 fix.go:56] duration metric: took 14.71975ms for fixHost
	I1025 16:23:35.670428   14468 start.go:83] releasing machines lock for "no-preload-140000", held for 14.733333ms
	W1025 16:23:35.670434   14468 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:35.670475   14468 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:35.670479   14468 start.go:729] Will try again in 5 seconds ...
	I1025 16:23:40.672635   14468 start.go:360] acquireMachinesLock for no-preload-140000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:40.673130   14468 start.go:364] duration metric: took 410.292µs to acquireMachinesLock for "no-preload-140000"
	I1025 16:23:40.673265   14468 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:23:40.673285   14468 fix.go:54] fixHost starting: 
	I1025 16:23:40.674042   14468 fix.go:112] recreateIfNeeded on no-preload-140000: state=Stopped err=<nil>
	W1025 16:23:40.674067   14468 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:23:40.677728   14468 out.go:177] * Restarting existing qemu2 VM for "no-preload-140000" ...
	I1025 16:23:40.684540   14468 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:40.684693   14468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f0:d5:94:df:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/no-preload-140000/disk.qcow2
	I1025 16:23:40.695664   14468 main.go:141] libmachine: STDOUT: 
	I1025 16:23:40.695726   14468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:40.695809   14468 fix.go:56] duration metric: took 22.524833ms for fixHost
	I1025 16:23:40.695822   14468 start.go:83] releasing machines lock for "no-preload-140000", held for 22.667542ms
	W1025 16:23:40.696009   14468 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-140000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:40.703516   14468 out.go:201] 
	W1025 16:23:40.706580   14468 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:40.706622   14468 out.go:270] * 
	* 
	W1025 16:23:40.709374   14468 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:40.717524   14468 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (64.683875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
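
The SecondStart log above shows the start path's single retry: StartHost fails, minikube logs "Will try again in 5 seconds", re-acquires the machine lock, runs fixHost once more, and only then exits with GUEST_PROVISION. A toy sketch of that one-retry shape; startHost here is a hypothetical stand-in, not minikube's actual function:

// retry_start.go - toy sketch of the one-retry behavior visible in the
// log: attempt, wait five seconds, attempt again, then surface the error.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a hypothetical stand-in for the driver start; in the log,
// this is the point where socket_vmnet_client fails.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second)
	if err := startHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}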

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-140000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (35.518833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-140000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-140000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-140000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.945958ms)

** stderr ** 
	error: context "no-preload-140000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-140000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (33.633042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-140000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (33.730792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
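
The -want +got block above has the shape of a github.com/google/go-cmp diff: every expected image is reported missing because "image list" was run against a profile whose VM was never created. An illustrative sketch of producing such a diff, with the want entries copied from the log and got left empty by assumption:

// image_diff.go - illustrative sketch: diff an expected image list against
// the empty result from a profile whose host never started, using go-cmp.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// want mirrors a few entries from the log; the real test lists all eight.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: `image list` had no running host to query
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}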

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-140000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-140000 --alsologtostderr -v=1: exit status 83 (44.617708ms)

-- stdout --
	* The control-plane node no-preload-140000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-140000"

-- /stdout --
** stderr ** 
	I1025 16:23:41.003253   14488 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:41.003450   14488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:41.003453   14488 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:41.003455   14488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:41.003599   14488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:41.003823   14488 out.go:352] Setting JSON to false
	I1025 16:23:41.003830   14488 mustload.go:65] Loading cluster: no-preload-140000
	I1025 16:23:41.004043   14488 config.go:182] Loaded profile config "no-preload-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:41.008878   14488 out.go:177] * The control-plane node no-preload-140000 host is not running: state=Stopped
	I1025 16:23:41.011880   14488 out.go:177]   To start a cluster, run: "minikube start -p no-preload-140000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-140000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (33.377542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (34.192083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-140000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-409000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-409000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.944384875s)

-- stdout --
	* [embed-certs-409000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-409000" primary control-plane node in "embed-certs-409000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-409000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:23:41.348380   14505 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:41.348540   14505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:41.348543   14505 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:41.348545   14505 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:41.348684   14505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:41.349864   14505 out.go:352] Setting JSON to false
	I1025 16:23:41.367629   14505 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7859,"bootTime":1729890762,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:41.367706   14505 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:41.372426   14505 out.go:177] * [embed-certs-409000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:41.378387   14505 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:41.378486   14505 notify.go:220] Checking for updates...
	I1025 16:23:41.385352   14505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:41.388320   14505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:41.391387   14505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:41.394364   14505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:41.397308   14505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:41.400658   14505 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:41.400721   14505 config.go:182] Loaded profile config "stopped-upgrade-782000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 16:23:41.400768   14505 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:41.404254   14505 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:23:41.411381   14505 start.go:297] selected driver: qemu2
	I1025 16:23:41.411388   14505 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:23:41.411394   14505 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:41.413966   14505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:23:41.415228   14505 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:23:41.418449   14505 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:41.418479   14505 cni.go:84] Creating CNI manager for ""
	I1025 16:23:41.418501   14505 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:23:41.418513   14505 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:23:41.418549   14505 start.go:340] cluster config:
	{Name:embed-certs-409000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:41.423211   14505 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:41.431330   14505 out.go:177] * Starting "embed-certs-409000" primary control-plane node in "embed-certs-409000" cluster
	I1025 16:23:41.435340   14505 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:23:41.435355   14505 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:23:41.435362   14505 cache.go:56] Caching tarball of preloaded images
	I1025 16:23:41.435425   14505 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:23:41.435430   14505 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:23:41.435477   14505 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/embed-certs-409000/config.json ...
	I1025 16:23:41.435488   14505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/embed-certs-409000/config.json: {Name:mk8244e648f4da972b24563e213595b63d773c49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:23:41.435727   14505 start.go:360] acquireMachinesLock for embed-certs-409000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:41.435774   14505 start.go:364] duration metric: took 39.583µs to acquireMachinesLock for "embed-certs-409000"
	I1025 16:23:41.435785   14505 start.go:93] Provisioning new machine with config: &{Name:embed-certs-409000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:41.435850   14505 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:41.444345   14505 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:41.459868   14505 start.go:159] libmachine.API.Create for "embed-certs-409000" (driver="qemu2")
	I1025 16:23:41.459899   14505 client.go:168] LocalClient.Create starting
	I1025 16:23:41.459967   14505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:41.460008   14505 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:41.460019   14505 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:41.460055   14505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:41.460088   14505 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:41.460100   14505 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:41.460527   14505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:41.624393   14505 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:41.711616   14505 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:41.711625   14505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:41.711861   14505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:41.721777   14505 main.go:141] libmachine: STDOUT: 
	I1025 16:23:41.721801   14505 main.go:141] libmachine: STDERR: 
	I1025 16:23:41.721864   14505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2 +20000M
	I1025 16:23:41.731012   14505 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:41.731028   14505 main.go:141] libmachine: STDERR: 
	I1025 16:23:41.731052   14505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:41.731056   14505 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:41.731067   14505 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:41.731105   14505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:d4:eb:ea:04:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:41.733005   14505 main.go:141] libmachine: STDOUT: 
	I1025 16:23:41.733025   14505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:41.733044   14505 client.go:171] duration metric: took 273.141208ms to LocalClient.Create
	I1025 16:23:43.735377   14505 start.go:128] duration metric: took 2.299502292s to createHost
	I1025 16:23:43.735585   14505 start.go:83] releasing machines lock for "embed-certs-409000", held for 2.2998165s
	W1025 16:23:43.735635   14505 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:43.753766   14505 out.go:177] * Deleting "embed-certs-409000" in qemu2 ...
	W1025 16:23:43.785366   14505 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:43.785392   14505 start.go:729] Will try again in 5 seconds ...
	I1025 16:23:48.787615   14505 start.go:360] acquireMachinesLock for embed-certs-409000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:48.788188   14505 start.go:364] duration metric: took 446µs to acquireMachinesLock for "embed-certs-409000"
	I1025 16:23:48.788335   14505 start.go:93] Provisioning new machine with config: &{Name:embed-certs-409000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:48.788660   14505 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:48.806195   14505 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:48.855031   14505 start.go:159] libmachine.API.Create for "embed-certs-409000" (driver="qemu2")
	I1025 16:23:48.855092   14505 client.go:168] LocalClient.Create starting
	I1025 16:23:48.855268   14505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:48.855352   14505 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:48.855379   14505 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:48.855468   14505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:48.855528   14505 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:48.855547   14505 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:48.856265   14505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:49.145950   14505 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:49.192312   14505 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:49.192318   14505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:49.192514   14505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:49.202413   14505 main.go:141] libmachine: STDOUT: 
	I1025 16:23:49.202436   14505 main.go:141] libmachine: STDERR: 
	I1025 16:23:49.202492   14505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2 +20000M
	I1025 16:23:49.211011   14505 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:49.211032   14505 main.go:141] libmachine: STDERR: 
	I1025 16:23:49.211043   14505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:49.211049   14505 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:49.211056   14505 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:49.211092   14505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:84:20:f0:93:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:49.212975   14505 main.go:141] libmachine: STDOUT: 
	I1025 16:23:49.212990   14505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:49.213007   14505 client.go:171] duration metric: took 357.911583ms to LocalClient.Create
	I1025 16:23:51.215164   14505 start.go:128] duration metric: took 2.42649375s to createHost
	I1025 16:23:51.215270   14505 start.go:83] releasing machines lock for "embed-certs-409000", held for 2.427071583s
	W1025 16:23:51.215668   14505 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-409000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:51.229328   14505 out.go:201] 
	W1025 16:23:51.232469   14505 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:51.232515   14505 out.go:270] * 
	* 
	W1025 16:23:51.235453   14505 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:51.246299   14505 out.go:201] 
** /stderr **
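Every VM launch in this run is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the unix socket at /var/run/socket_vmnet and hands QEMU the resulting descriptor (the -netdev socket,id=net0,fd=3 argument above). The recurring `Failed to connect to "/var/run/socket_vmnet": Connection refused` therefore points at the socket_vmnet daemon not listening on the build host, not at QEMU itself. A minimal Go sketch of a pre-flight probe for that socket (illustrative only, not part of minikube):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the unix socket that socket_vmnet_client needs. With no daemon
    	// listening, this fails with "connection refused" (stale socket file)
    	// or "no such file or directory" (no socket file at all).
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is listening")
    }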
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-409000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (70.492042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
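Provisioning gets fairly far before the failure: the SSH key, the raw-to-qcow2 conversion, and the +20000M resize all succeed, and only the QEMU launch through socket_vmnet fails. A sketch of the two qemu-img invocations shown in the log, under the assumption of a thin os/exec wrapper (the paths are shortened stand-ins for the .minikube/machines files above; this is not libmachine's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // createDisk mirrors the two commands logged above: convert the raw seed
    // image to qcow2, then grow the qcow2 by the requested amount.
    func createDisk(raw, qcow2, grow string) error {
    	for _, args := range [][]string{
    		{"convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
    		{"resize", qcow2, "+" + grow},
    	} {
    		if out, err := exec.Command("qemu-img", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("qemu-img %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	// hypothetical paths standing in for disk.qcow2.raw / disk.qcow2 above
    	if err := createDisk("disk.qcow2.raw", "disk.qcow2", "20000M"); err != nil {
    		fmt.Println(err)
    	}
    }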
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.40277025s)
-- stdout --
	* [default-k8s-diff-port-548000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-548000" primary control-plane node in "default-k8s-diff-port-548000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-548000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1025 16:23:42.807162   14527 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:42.807311   14527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:42.807315   14527 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:42.807317   14527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:42.807448   14527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:42.808626   14527 out.go:352] Setting JSON to false
	I1025 16:23:42.826327   14527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7860,"bootTime":1729890762,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:42.826401   14527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:42.831143   14527 out.go:177] * [default-k8s-diff-port-548000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:42.837073   14527 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:42.837142   14527 notify.go:220] Checking for updates...
	I1025 16:23:42.844998   14527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:42.848009   14527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:42.851108   14527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:42.854014   14527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:42.857030   14527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:42.860438   14527 config.go:182] Loaded profile config "embed-certs-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:42.860498   14527 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:42.860546   14527 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:42.863902   14527 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:23:42.871024   14527 start.go:297] selected driver: qemu2
	I1025 16:23:42.871031   14527 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:23:42.871038   14527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:42.873551   14527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 16:23:42.874844   14527 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:23:42.878125   14527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:42.878150   14527 cni.go:84] Creating CNI manager for ""
	I1025 16:23:42.878179   14527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:23:42.878185   14527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:23:42.878234   14527 start.go:340] cluster config:
	{Name:default-k8s-diff-port-548000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:42.882895   14527 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:42.891004   14527 out.go:177] * Starting "default-k8s-diff-port-548000" primary control-plane node in "default-k8s-diff-port-548000" cluster
	I1025 16:23:42.895053   14527 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:23:42.895073   14527 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:23:42.895083   14527 cache.go:56] Caching tarball of preloaded images
	I1025 16:23:42.895162   14527 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:23:42.895174   14527 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:23:42.895234   14527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/default-k8s-diff-port-548000/config.json ...
	I1025 16:23:42.895249   14527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/default-k8s-diff-port-548000/config.json: {Name:mk827a6e4edf26c3f5a0b76e8b0c8ce597c8d4c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:23:42.895634   14527 start.go:360] acquireMachinesLock for default-k8s-diff-port-548000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:43.735722   14527 start.go:364] duration metric: took 840.065375ms to acquireMachinesLock for "default-k8s-diff-port-548000"
	I1025 16:23:43.735956   14527 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-548000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:43.736224   14527 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:43.745807   14527 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:43.794177   14527 start.go:159] libmachine.API.Create for "default-k8s-diff-port-548000" (driver="qemu2")
	I1025 16:23:43.794228   14527 client.go:168] LocalClient.Create starting
	I1025 16:23:43.794370   14527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:43.794445   14527 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:43.794468   14527 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:43.794545   14527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:43.794606   14527 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:43.794618   14527 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:43.795349   14527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:44.278863   14527 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:44.351360   14527 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:44.351370   14527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:44.351596   14527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:44.370405   14527 main.go:141] libmachine: STDOUT: 
	I1025 16:23:44.370432   14527 main.go:141] libmachine: STDERR: 
	I1025 16:23:44.370498   14527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2 +20000M
	I1025 16:23:44.378994   14527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:44.379016   14527 main.go:141] libmachine: STDERR: 
	I1025 16:23:44.379040   14527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:44.379046   14527 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:44.379056   14527 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:44.379081   14527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:39:2d:60:44:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:44.380847   14527 main.go:141] libmachine: STDOUT: 
	I1025 16:23:44.380862   14527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:44.380886   14527 client.go:171] duration metric: took 586.654541ms to LocalClient.Create
	I1025 16:23:46.383048   14527 start.go:128] duration metric: took 2.64679925s to createHost
	I1025 16:23:46.383130   14527 start.go:83] releasing machines lock for "default-k8s-diff-port-548000", held for 2.647348542s
	W1025 16:23:46.383188   14527 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:46.402977   14527 out.go:177] * Deleting "default-k8s-diff-port-548000" in qemu2 ...
	W1025 16:23:46.430146   14527 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:46.430170   14527 start.go:729] Will try again in 5 seconds ...
	I1025 16:23:51.432200   14527 start.go:360] acquireMachinesLock for default-k8s-diff-port-548000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:51.432258   14527 start.go:364] duration metric: took 43.667µs to acquireMachinesLock for "default-k8s-diff-port-548000"
	I1025 16:23:51.432271   14527 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-548000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:23:51.432325   14527 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:23:51.438309   14527 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:23:51.453789   14527 start.go:159] libmachine.API.Create for "default-k8s-diff-port-548000" (driver="qemu2")
	I1025 16:23:51.453833   14527 client.go:168] LocalClient.Create starting
	I1025 16:23:51.453915   14527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:23:51.453949   14527 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:51.453959   14527 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:51.453998   14527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:23:51.454015   14527 main.go:141] libmachine: Decoding PEM data...
	I1025 16:23:51.454020   14527 main.go:141] libmachine: Parsing certificate...
	I1025 16:23:51.454373   14527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:23:51.950551   14527 main.go:141] libmachine: Creating SSH key...
	I1025 16:23:52.110930   14527 main.go:141] libmachine: Creating Disk image...
	I1025 16:23:52.110940   14527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:23:52.111151   14527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:52.121229   14527 main.go:141] libmachine: STDOUT: 
	I1025 16:23:52.121264   14527 main.go:141] libmachine: STDERR: 
	I1025 16:23:52.121335   14527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2 +20000M
	I1025 16:23:52.129899   14527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:23:52.129915   14527 main.go:141] libmachine: STDERR: 
	I1025 16:23:52.129927   14527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:52.129935   14527 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:23:52.129944   14527 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:52.129973   14527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:8d:9d:05:4a:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:52.131769   14527 main.go:141] libmachine: STDOUT: 
	I1025 16:23:52.131805   14527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:52.131819   14527 client.go:171] duration metric: took 677.987083ms to LocalClient.Create
	I1025 16:23:54.134116   14527 start.go:128] duration metric: took 2.701758458s to createHost
	I1025 16:23:54.134200   14527 start.go:83] releasing machines lock for "default-k8s-diff-port-548000", held for 2.7019495s
	W1025 16:23:54.134562   14527 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-548000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-548000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:54.144129   14527 out.go:201] 
	W1025 16:23:54.150311   14527 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:54.150359   14527 out.go:270] * 
	* 
	W1025 16:23:54.152953   14527 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:23:54.162213   14527 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (73.689584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.48s)
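Both attempts inside a single FirstStart fail identically: createHost fails, minikube deletes the half-created machine, waits five seconds ("Will try again in 5 seconds ..."), and the lone retry hits the same refused socket, which is why each failed start costs roughly ten seconds. A rough sketch of that shape, assuming one fixed-delay retry (minikube's real control flow in start.go is more involved):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the provisioning step that fails in this run.
    func startHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	err := startHost()
    	if err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // fixed back-off seen in the log
    		err = startHost()
    	}
    	if err != nil {
    		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    	}
    }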
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-409000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-409000 create -f testdata/busybox.yaml: exit status 1 (28.747084ms)
** stderr ** 
	error: context "embed-certs-409000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-409000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (33.144917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (33.481709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
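Every kubectl-based subtest from here on fails before touching any API server: FirstStart never brought the cluster up, so no "embed-certs-409000" context was ever written to the kubeconfig, and `kubectl --context embed-certs-409000 ...` exits immediately. A quick check for that precondition, shelling out to the real `kubectl config get-contexts -o name` (the helper itself is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hasContext reports whether name appears in `kubectl config get-contexts -o name`.
    func hasContext(name string) (bool, error) {
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false, err
    	}
    	for _, c := range strings.Fields(string(out)) {
    		if c == name {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasContext("embed-certs-409000")
    	fmt.Println(ok, err) // false, <nil> in this run: the context was never created
    }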
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.16s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-409000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-409000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-409000 describe deploy/metrics-server -n kube-system: exit status 1 (30.97525ms)
** stderr ** 
	error: context "embed-certs-409000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-409000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (35.8425ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.16s)
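The assertion above expects the metrics-server deployment to reference the image with the registry override applied, i.e. the --registries value prefixed onto the --images value; with no deployment info retrievable at all, the containment check can only fail. Roughly how that expected string is formed (illustrative only, not the test's actual helper):

    package main

    import "fmt"

    func main() {
    	registry := "fake.domain"                 // from --registries=MetricsServer=fake.domain
    	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=registry.k8s.io/echoserver:1.4
    	expected := registry + "/" + image
    	fmt.Println(expected) // fake.domain/registry.k8s.io/echoserver:1.4
    }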
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-548000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-548000 create -f testdata/busybox.yaml: exit status 1 (29.611ms)
** stderr ** 
	error: context "default-k8s-diff-port-548000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-548000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (33.700333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (33.234834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-548000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-548000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-548000 describe deploy/metrics-server -n kube-system: exit status 1 (27.195ms)
** stderr ** 
	error: context "default-k8s-diff-port-548000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-548000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (33.060333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-409000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-409000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191730708s)

-- stdout --
	* [embed-certs-409000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-409000" primary control-plane node in "embed-certs-409000" cluster
	* Restarting existing qemu2 VM for "embed-certs-409000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-409000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:23:54.953866   14597 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:54.954034   14597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:54.954037   14597 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:54.954039   14597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:54.954159   14597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:54.955194   14597 out.go:352] Setting JSON to false
	I1025 16:23:54.972949   14597 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7872,"bootTime":1729890762,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:54.973019   14597 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:54.977590   14597 out.go:177] * [embed-certs-409000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:54.985461   14597 notify.go:220] Checking for updates...
	I1025 16:23:54.989486   14597 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:54.992466   14597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:54.995428   14597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:54.999465   14597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:55.002398   14597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:55.005455   14597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:55.008795   14597 config.go:182] Loaded profile config "embed-certs-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:55.009081   14597 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:55.012387   14597 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:23:55.019475   14597 start.go:297] selected driver: qemu2
	I1025 16:23:55.019483   14597 start.go:901] validating driver "qemu2" against &{Name:embed-certs-409000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:55.019544   14597 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:55.022156   14597 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:55.022183   14597 cni.go:84] Creating CNI manager for ""
	I1025 16:23:55.022204   14597 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:23:55.022231   14597 start.go:340] cluster config:
	{Name:embed-certs-409000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-409000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:55.026813   14597 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:55.034481   14597 out.go:177] * Starting "embed-certs-409000" primary control-plane node in "embed-certs-409000" cluster
	I1025 16:23:55.038464   14597 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:23:55.038480   14597 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:23:55.038490   14597 cache.go:56] Caching tarball of preloaded images
	I1025 16:23:55.038565   14597 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:23:55.038578   14597 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:23:55.038632   14597 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/embed-certs-409000/config.json ...
	I1025 16:23:55.039092   14597 start.go:360] acquireMachinesLock for embed-certs-409000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:55.039122   14597 start.go:364] duration metric: took 24.917µs to acquireMachinesLock for "embed-certs-409000"
	I1025 16:23:55.039131   14597 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:23:55.039136   14597 fix.go:54] fixHost starting: 
	I1025 16:23:55.039250   14597 fix.go:112] recreateIfNeeded on embed-certs-409000: state=Stopped err=<nil>
	W1025 16:23:55.039258   14597 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:23:55.047461   14597 out.go:177] * Restarting existing qemu2 VM for "embed-certs-409000" ...
	I1025 16:23:55.051409   14597 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:55.051451   14597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:84:20:f0:93:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:23:55.053536   14597 main.go:141] libmachine: STDOUT: 
	I1025 16:23:55.053553   14597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:55.053580   14597 fix.go:56] duration metric: took 14.443167ms for fixHost
	I1025 16:23:55.053586   14597 start.go:83] releasing machines lock for "embed-certs-409000", held for 14.459125ms
	W1025 16:23:55.053591   14597 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:55.053629   14597 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:55.053633   14597 start.go:729] Will try again in 5 seconds ...
	I1025 16:24:00.055888   14597 start.go:360] acquireMachinesLock for embed-certs-409000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:24:00.056303   14597 start.go:364] duration metric: took 314.25µs to acquireMachinesLock for "embed-certs-409000"
	I1025 16:24:00.056477   14597 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:24:00.056497   14597 fix.go:54] fixHost starting: 
	I1025 16:24:00.057234   14597 fix.go:112] recreateIfNeeded on embed-certs-409000: state=Stopped err=<nil>
	W1025 16:24:00.057259   14597 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:24:00.062009   14597 out.go:177] * Restarting existing qemu2 VM for "embed-certs-409000" ...
	I1025 16:24:00.068757   14597 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:24:00.069044   14597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:84:20:f0:93:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/embed-certs-409000/disk.qcow2
	I1025 16:24:00.078821   14597 main.go:141] libmachine: STDOUT: 
	I1025 16:24:00.078876   14597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:24:00.078946   14597 fix.go:56] duration metric: took 22.4505ms for fixHost
	I1025 16:24:00.078968   14597 start.go:83] releasing machines lock for "embed-certs-409000", held for 22.642667ms
	W1025 16:24:00.079199   14597 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-409000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-409000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:00.085731   14597 out.go:201] 
	W1025 16:24:00.089813   14597 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:24:00.089843   14597 out.go:270] * 
	* 
	W1025 16:24:00.092338   14597 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:24:00.099840   14597 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-409000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (69.6385ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
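
Every restart attempt in this run fails with the same host-side error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine command line above) cannot reach the socket_vmnet daemon's unix socket. A minimal host-side triage sketch; the socket and client paths are taken from the log, while the Homebrew-managed service is an assumption about how socket_vmnet was installed:

    # is the daemon's unix socket present, and is the daemon process alive?
    $ ls -l /var/run/socket_vmnet
    $ pgrep -fl socket_vmnet
    # if not, restart the root-owned service (Homebrew install assumed)
    $ sudo brew services restart socket_vmnet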

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.406064167s)

-- stdout --
	* [default-k8s-diff-port-548000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-548000" primary control-plane node in "default-k8s-diff-port-548000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-548000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-548000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:23:57.766760   14620 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:23:57.766923   14620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:57.766926   14620 out.go:358] Setting ErrFile to fd 2...
	I1025 16:23:57.766928   14620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:23:57.767041   14620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:23:57.768131   14620 out.go:352] Setting JSON to false
	I1025 16:23:57.785846   14620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7875,"bootTime":1729890762,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:23:57.785918   14620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:23:57.791185   14620 out.go:177] * [default-k8s-diff-port-548000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:23:57.797102   14620 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:23:57.797152   14620 notify.go:220] Checking for updates...
	I1025 16:23:57.804984   14620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:23:57.808048   14620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:23:57.811082   14620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:23:57.814048   14620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:23:57.817093   14620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:23:57.820439   14620 config.go:182] Loaded profile config "default-k8s-diff-port-548000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:23:57.820739   14620 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:23:57.824010   14620 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:23:57.831105   14620 start.go:297] selected driver: qemu2
	I1025 16:23:57.831113   14620 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-548000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:57.831176   14620 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:23:57.833796   14620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 16:23:57.833826   14620 cni.go:84] Creating CNI manager for ""
	I1025 16:23:57.833854   14620 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:23:57.833878   14620 start.go:340] cluster config:
	{Name:default-k8s-diff-port-548000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-548000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:23:57.838423   14620 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:23:57.846071   14620 out.go:177] * Starting "default-k8s-diff-port-548000" primary control-plane node in "default-k8s-diff-port-548000" cluster
	I1025 16:23:57.849049   14620 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:23:57.849062   14620 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:23:57.849070   14620 cache.go:56] Caching tarball of preloaded images
	I1025 16:23:57.849120   14620 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:23:57.849125   14620 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:23:57.849176   14620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/default-k8s-diff-port-548000/config.json ...
	I1025 16:23:57.849608   14620 start.go:360] acquireMachinesLock for default-k8s-diff-port-548000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:23:57.849638   14620 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "default-k8s-diff-port-548000"
	I1025 16:23:57.849647   14620 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:23:57.849652   14620 fix.go:54] fixHost starting: 
	I1025 16:23:57.849765   14620 fix.go:112] recreateIfNeeded on default-k8s-diff-port-548000: state=Stopped err=<nil>
	W1025 16:23:57.849773   14620 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:23:57.854122   14620 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-548000" ...
	I1025 16:23:57.861068   14620 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:23:57.861105   14620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:8d:9d:05:4a:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:23:57.863341   14620 main.go:141] libmachine: STDOUT: 
	I1025 16:23:57.863359   14620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:23:57.863395   14620 fix.go:56] duration metric: took 13.742959ms for fixHost
	I1025 16:23:57.863399   14620 start.go:83] releasing machines lock for "default-k8s-diff-port-548000", held for 13.7565ms
	W1025 16:23:57.863405   14620 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:23:57.863448   14620 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:23:57.863452   14620 start.go:729] Will try again in 5 seconds ...
	I1025 16:24:02.865663   14620 start.go:360] acquireMachinesLock for default-k8s-diff-port-548000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:24:03.058310   14620 start.go:364] duration metric: took 192.526ms to acquireMachinesLock for "default-k8s-diff-port-548000"
	I1025 16:24:03.058405   14620 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:24:03.058429   14620 fix.go:54] fixHost starting: 
	I1025 16:24:03.059275   14620 fix.go:112] recreateIfNeeded on default-k8s-diff-port-548000: state=Stopped err=<nil>
	W1025 16:24:03.059303   14620 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:24:03.069717   14620 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-548000" ...
	I1025 16:24:03.082742   14620 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:24:03.083009   14620 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:8d:9d:05:4a:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/default-k8s-diff-port-548000/disk.qcow2
	I1025 16:24:03.094510   14620 main.go:141] libmachine: STDOUT: 
	I1025 16:24:03.094569   14620 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:24:03.094646   14620 fix.go:56] duration metric: took 36.217167ms for fixHost
	I1025 16:24:03.094663   14620 start.go:83] releasing machines lock for "default-k8s-diff-port-548000", held for 36.329792ms
	W1025 16:24:03.094868   14620 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-548000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-548000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:03.103728   14620 out.go:201] 
	W1025 16:24:03.108805   14620 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:24:03.108828   14620 out.go:270] * 
	* 
	W1025 16:24:03.110651   14620 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:24:03.123735   14620 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (70.498584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.48s)
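
The recovery path suggested in the output above is to recreate the profile. A sketch of that sequence, reusing the exact flags from this test; it can only succeed once the host's socket_vmnet daemon is reachable again:

    $ out/minikube-darwin-arm64 delete -p default-k8s-diff-port-548000
    $ out/minikube-darwin-arm64 start -p default-k8s-diff-port-548000 --memory=2200 --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.1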

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-409000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (35.268792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-409000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.405375ms)

** stderr ** 
	error: context "embed-certs-409000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-409000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (33.443375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-409000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (33.605584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
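
The all-images-missing diff above is a secondary symptom: with the VM stopped, `image list --format=json` returns an empty list, so every expected v1.31.1 image lands on the "-want" side. On a running cluster the same check can be reproduced by hand (a sketch; table output and a grep over the default short format are just two views of the same subcommand):

    $ out/minikube-darwin-arm64 -p embed-certs-409000 image list --format=table
    $ out/minikube-darwin-arm64 -p embed-certs-409000 image list | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)'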

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-409000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-409000 --alsologtostderr -v=1: exit status 83 (44.751791ms)

-- stdout --
	* The control-plane node embed-certs-409000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-409000"

-- /stdout --
** stderr ** 
	I1025 16:24:00.390343   14639 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:24:00.390529   14639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:00.390532   14639 out.go:358] Setting ErrFile to fd 2...
	I1025 16:24:00.390534   14639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:00.390653   14639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:24:00.390895   14639 out.go:352] Setting JSON to false
	I1025 16:24:00.390902   14639 mustload.go:65] Loading cluster: embed-certs-409000
	I1025 16:24:00.391127   14639 config.go:182] Loaded profile config "embed-certs-409000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:24:00.395921   14639 out.go:177] * The control-plane node embed-certs-409000 host is not running: state=Stopped
	I1025 16:24:00.398953   14639 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-409000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-409000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (32.590375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (32.109917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-409000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.813793042s)

-- stdout --
	* [newest-cni-262000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-262000" primary control-plane node in "newest-cni-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:24:00.729319   14656 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:24:00.729479   14656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:00.729482   14656 out.go:358] Setting ErrFile to fd 2...
	I1025 16:24:00.729485   14656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:00.729615   14656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:24:00.730687   14656 out.go:352] Setting JSON to false
	I1025 16:24:00.748733   14656 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7878,"bootTime":1729890762,"procs":511,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:24:00.748803   14656 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:24:00.753334   14656 out.go:177] * [newest-cni-262000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:24:00.759296   14656 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:24:00.759353   14656 notify.go:220] Checking for updates...
	I1025 16:24:00.766312   14656 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:24:00.769294   14656 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:24:00.772270   14656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:24:00.775354   14656 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:24:00.778279   14656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:24:00.781633   14656 config.go:182] Loaded profile config "default-k8s-diff-port-548000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:24:00.781701   14656 config.go:182] Loaded profile config "multinode-747000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:24:00.781748   14656 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:24:00.786290   14656 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 16:24:00.793275   14656 start.go:297] selected driver: qemu2
	I1025 16:24:00.793282   14656 start.go:901] validating driver "qemu2" against <nil>
	I1025 16:24:00.793290   14656 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:24:00.795776   14656 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1025 16:24:00.795816   14656 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 16:24:00.800300   14656 out.go:177] * Automatically selected the socket_vmnet network
	I1025 16:24:00.803351   14656 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 16:24:00.803368   14656 cni.go:84] Creating CNI manager for ""
	I1025 16:24:00.803388   14656 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:24:00.803392   14656 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 16:24:00.803424   14656 start.go:340] cluster config:
	{Name:newest-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:24:00.808232   14656 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:24:00.816294   14656 out.go:177] * Starting "newest-cni-262000" primary control-plane node in "newest-cni-262000" cluster
	I1025 16:24:00.820301   14656 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:24:00.820320   14656 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:24:00.820330   14656 cache.go:56] Caching tarball of preloaded images
	I1025 16:24:00.820416   14656 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:24:00.820423   14656 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:24:00.820494   14656 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/newest-cni-262000/config.json ...
	I1025 16:24:00.820506   14656 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/newest-cni-262000/config.json: {Name:mk5fbc2f188488b35732e27c086e6b3ee02ec8e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 16:24:00.820916   14656 start.go:360] acquireMachinesLock for newest-cni-262000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:24:00.820971   14656 start.go:364] duration metric: took 48.625µs to acquireMachinesLock for "newest-cni-262000"
	I1025 16:24:00.820983   14656 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:24:00.821015   14656 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:24:00.828251   14656 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:24:00.846275   14656 start.go:159] libmachine.API.Create for "newest-cni-262000" (driver="qemu2")
	I1025 16:24:00.846304   14656 client.go:168] LocalClient.Create starting
	I1025 16:24:00.846379   14656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:24:00.846417   14656 main.go:141] libmachine: Decoding PEM data...
	I1025 16:24:00.846428   14656 main.go:141] libmachine: Parsing certificate...
	I1025 16:24:00.846466   14656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:24:00.846497   14656 main.go:141] libmachine: Decoding PEM data...
	I1025 16:24:00.846507   14656 main.go:141] libmachine: Parsing certificate...
	I1025 16:24:00.846961   14656 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:24:01.004136   14656 main.go:141] libmachine: Creating SSH key...
	I1025 16:24:01.035407   14656 main.go:141] libmachine: Creating Disk image...
	I1025 16:24:01.035416   14656 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:24:01.035622   14656 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:01.045461   14656 main.go:141] libmachine: STDOUT: 
	I1025 16:24:01.045482   14656 main.go:141] libmachine: STDERR: 
	I1025 16:24:01.045527   14656 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2 +20000M
	I1025 16:24:01.054133   14656 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:24:01.054148   14656 main.go:141] libmachine: STDERR: 
	I1025 16:24:01.054167   14656 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:01.054173   14656 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:24:01.054185   14656 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:24:01.054219   14656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:b2:4a:cb:ca:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:01.055955   14656 main.go:141] libmachine: STDOUT: 
	I1025 16:24:01.055968   14656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:24:01.055986   14656 client.go:171] duration metric: took 209.677417ms to LocalClient.Create
	I1025 16:24:03.058142   14656 start.go:128] duration metric: took 2.2371215s to createHost
	I1025 16:24:03.058182   14656 start.go:83] releasing machines lock for "newest-cni-262000", held for 2.237216834s
	W1025 16:24:03.058235   14656 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:03.078805   14656 out.go:177] * Deleting "newest-cni-262000" in qemu2 ...
	W1025 16:24:03.136992   14656 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:03.137034   14656 start.go:729] Will try again in 5 seconds ...
	I1025 16:24:08.139280   14656 start.go:360] acquireMachinesLock for newest-cni-262000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:24:08.140265   14656 start.go:364] duration metric: took 767.084µs to acquireMachinesLock for "newest-cni-262000"
	I1025 16:24:08.140453   14656 start.go:93] Provisioning new machine with config: &{Name:newest-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 16:24:08.140917   14656 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 16:24:08.145807   14656 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 16:24:08.195419   14656 start.go:159] libmachine.API.Create for "newest-cni-262000" (driver="qemu2")
	I1025 16:24:08.195486   14656 client.go:168] LocalClient.Create starting
	I1025 16:24:08.195640   14656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/ca.pem
	I1025 16:24:08.195723   14656 main.go:141] libmachine: Decoding PEM data...
	I1025 16:24:08.195751   14656 main.go:141] libmachine: Parsing certificate...
	I1025 16:24:08.195827   14656 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19758-10490/.minikube/certs/cert.pem
	I1025 16:24:08.195884   14656 main.go:141] libmachine: Decoding PEM data...
	I1025 16:24:08.195898   14656 main.go:141] libmachine: Parsing certificate...
	I1025 16:24:08.196590   14656 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 16:24:08.365828   14656 main.go:141] libmachine: Creating SSH key...
	I1025 16:24:08.442737   14656 main.go:141] libmachine: Creating Disk image...
	I1025 16:24:08.442743   14656 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 16:24:08.442958   14656 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:08.452822   14656 main.go:141] libmachine: STDOUT: 
	I1025 16:24:08.452846   14656 main.go:141] libmachine: STDERR: 
	I1025 16:24:08.452912   14656 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2 +20000M
	I1025 16:24:08.461435   14656 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 16:24:08.461452   14656 main.go:141] libmachine: STDERR: 
	I1025 16:24:08.461463   14656 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:08.461467   14656 main.go:141] libmachine: Starting QEMU VM...
	I1025 16:24:08.461482   14656 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:24:08.461509   14656 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:97:f6:54:d7:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:08.463383   14656 main.go:141] libmachine: STDOUT: 
	I1025 16:24:08.463402   14656 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:24:08.463421   14656 client.go:171] duration metric: took 267.931ms to LocalClient.Create
	I1025 16:24:10.465642   14656 start.go:128] duration metric: took 2.324652667s to createHost
	I1025 16:24:10.465695   14656 start.go:83] releasing machines lock for "newest-cni-262000", held for 2.325390833s
	W1025 16:24:10.466121   14656 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:10.480696   14656 out.go:201] 
	W1025 16:24:10.484851   14656 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:24:10.484877   14656 out.go:270] * 
	* 
	W1025 16:24:10.487963   14656 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:24:10.498588   14656 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000: exit status 7 (72.545458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.89s)
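Every create and restart attempt in this test dies at the same step: libmachine shells out to socket_vmnet_client, which cannot reach the host-side socket at /var/run/socket_vmnet ("Connection refused"). That points at the socket_vmnet daemon being down on the build agent rather than at anything profile-specific. A minimal triage sketch for the Darwin host, assuming a Homebrew-managed socket_vmnet as the /opt/socket_vmnet paths in the log suggest:

	# Is the daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, restart it; socket_vmnet must run as root to use vmnet.framework
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

With the daemon back, rerunning the same start command should get past the libmachine create step; the later socket_vmnet failures in this report share the same root cause.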

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-548000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (35.015167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
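This failure and the AddonExistsAfterStop failure below are kubeconfig-level, not cluster-level: the default-k8s-diff-port-548000 context was never recreated because the profile's restart failed, so the client aborts before issuing any API call. A quick manual confirmation from the same shell (hypothetical check, not part of the harness):

	kubectl config get-contexts
	kubectl config current-context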

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-548000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-548000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-548000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.108ms)

** stderr ** 
	error: context "default-k8s-diff-port-548000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-548000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (32.930584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-548000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (32.604875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
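The image check diffs a want-list of v1.31.1 images against what `image list --format=json` reports; with the VM stopped the list comes back empty, so every expected image shows as missing rather than any image actually being wrong. A sketch of reproducing the comparison by hand, assuming the JSON entries expose a repoTags array and that jq is available on the agent:

	out/minikube-darwin-arm64 -p default-k8s-diff-port-548000 image list --format=json \
		| jq -r '.[].repoTags[]' | sort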

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-548000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-548000 --alsologtostderr -v=1: exit status 83 (44.9425ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-548000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-548000"

-- /stdout --
** stderr ** 
	I1025 16:24:03.415871   14678 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:24:03.416072   14678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:03.416075   14678 out.go:358] Setting ErrFile to fd 2...
	I1025 16:24:03.416078   14678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:03.416198   14678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:24:03.416415   14678 out.go:352] Setting JSON to false
	I1025 16:24:03.416423   14678 mustload.go:65] Loading cluster: default-k8s-diff-port-548000
	I1025 16:24:03.416648   14678 config.go:182] Loaded profile config "default-k8s-diff-port-548000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:24:03.421218   14678 out.go:177] * The control-plane node default-k8s-diff-port-548000 host is not running: state=Stopped
	I1025 16:24:03.425289   14678 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-548000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-548000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (32.676ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (33.734625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-548000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.197623542s)

-- stdout --
	* [newest-cni-262000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-262000" primary control-plane node in "newest-cni-262000" cluster
	* Restarting existing qemu2 VM for "newest-cni-262000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-262000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 16:24:14.094868   14725 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:24:14.095037   14725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:14.095040   14725 out.go:358] Setting ErrFile to fd 2...
	I1025 16:24:14.095042   14725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:14.095165   14725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:24:14.096159   14725 out.go:352] Setting JSON to false
	I1025 16:24:14.113674   14725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7892,"bootTime":1729890762,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 16:24:14.113741   14725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 16:24:14.118523   14725 out.go:177] * [newest-cni-262000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 16:24:14.130540   14725 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 16:24:14.130609   14725 notify.go:220] Checking for updates...
	I1025 16:24:14.138494   14725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 16:24:14.139932   14725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 16:24:14.143501   14725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 16:24:14.146519   14725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 16:24:14.149536   14725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 16:24:14.152824   14725 config.go:182] Loaded profile config "newest-cni-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:24:14.153103   14725 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 16:24:14.157470   14725 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 16:24:14.164488   14725 start.go:297] selected driver: qemu2
	I1025 16:24:14.164495   14725 start.go:901] validating driver "qemu2" against &{Name:newest-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:24:14.164549   14725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 16:24:14.167173   14725 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 16:24:14.167196   14725 cni.go:84] Creating CNI manager for ""
	I1025 16:24:14.167221   14725 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 16:24:14.167248   14725 start.go:340] cluster config:
	{Name:newest-cni-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 16:24:14.171814   14725 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 16:24:14.180435   14725 out.go:177] * Starting "newest-cni-262000" primary control-plane node in "newest-cni-262000" cluster
	I1025 16:24:14.183596   14725 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 16:24:14.183615   14725 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 16:24:14.183622   14725 cache.go:56] Caching tarball of preloaded images
	I1025 16:24:14.183696   14725 preload.go:172] Found /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 16:24:14.183702   14725 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1025 16:24:14.183761   14725 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/newest-cni-262000/config.json ...
	I1025 16:24:14.184270   14725 start.go:360] acquireMachinesLock for newest-cni-262000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:24:14.184303   14725 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "newest-cni-262000"
	I1025 16:24:14.184312   14725 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:24:14.184317   14725 fix.go:54] fixHost starting: 
	I1025 16:24:14.184439   14725 fix.go:112] recreateIfNeeded on newest-cni-262000: state=Stopped err=<nil>
	W1025 16:24:14.184446   14725 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:24:14.188470   14725 out.go:177] * Restarting existing qemu2 VM for "newest-cni-262000" ...
	I1025 16:24:14.196478   14725 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:24:14.196515   14725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:97:f6:54:d7:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:14.198799   14725 main.go:141] libmachine: STDOUT: 
	I1025 16:24:14.198821   14725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:24:14.198851   14725 fix.go:56] duration metric: took 14.53325ms for fixHost
	I1025 16:24:14.198856   14725 start.go:83] releasing machines lock for "newest-cni-262000", held for 14.548333ms
	W1025 16:24:14.198863   14725 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:24:14.198904   14725 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:14.198909   14725 start.go:729] Will try again in 5 seconds ...
	I1025 16:24:19.201084   14725 start.go:360] acquireMachinesLock for newest-cni-262000: {Name:mk382abcaedbc538c9a7f9940a4d92e97c0407e3 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 16:24:19.201492   14725 start.go:364] duration metric: took 326.791µs to acquireMachinesLock for "newest-cni-262000"
	I1025 16:24:19.201626   14725 start.go:96] Skipping create...Using existing machine configuration
	I1025 16:24:19.201650   14725 fix.go:54] fixHost starting: 
	I1025 16:24:19.202346   14725 fix.go:112] recreateIfNeeded on newest-cni-262000: state=Stopped err=<nil>
	W1025 16:24:19.202371   14725 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 16:24:19.210688   14725 out.go:177] * Restarting existing qemu2 VM for "newest-cni-262000" ...
	I1025 16:24:19.213730   14725 qemu.go:418] Using hvf for hardware acceleration
	I1025 16:24:19.214097   14725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:97:f6:54:d7:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19758-10490/.minikube/machines/newest-cni-262000/disk.qcow2
	I1025 16:24:19.223882   14725 main.go:141] libmachine: STDOUT: 
	I1025 16:24:19.223975   14725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 16:24:19.224067   14725 fix.go:56] duration metric: took 22.42075ms for fixHost
	I1025 16:24:19.224090   14725 start.go:83] releasing machines lock for "newest-cni-262000", held for 22.570042ms
	W1025 16:24:19.224306   14725 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-262000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-262000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 16:24:19.232706   14725 out.go:201] 
	W1025 16:24:19.236836   14725 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 16:24:19.236861   14725 out.go:270] * 
	* 
	W1025 16:24:19.239447   14725 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 16:24:19.246721   14725 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000: exit status 7 (74.008584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
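The inline advice to run "minikube delete -p newest-cni-262000" targets stale profile state and will not help while /var/run/socket_vmnet is unreachable. Once the daemon is restored (see the triage sketch after FirstStart above), the recovery path the error text itself suggests would look like this; note the delete is destructive for the profile:

	out/minikube-darwin-arm64 delete -p newest-cni-262000
	out/minikube-darwin-arm64 start -p newest-cni-262000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.1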

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-262000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000: exit status 7 (34.084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-262000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-262000 --alsologtostderr -v=1: exit status 83 (46.4155ms)

-- stdout --
	* The control-plane node newest-cni-262000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-262000"

-- /stdout --
** stderr ** 
	I1025 16:24:19.450906   14739 out.go:345] Setting OutFile to fd 1 ...
	I1025 16:24:19.451116   14739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:19.451119   14739 out.go:358] Setting ErrFile to fd 2...
	I1025 16:24:19.451122   14739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 16:24:19.451265   14739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 16:24:19.451489   14739 out.go:352] Setting JSON to false
	I1025 16:24:19.451496   14739 mustload.go:65] Loading cluster: newest-cni-262000
	I1025 16:24:19.451720   14739 config.go:182] Loaded profile config "newest-cni-262000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 16:24:19.456061   14739 out.go:177] * The control-plane node newest-cni-262000 host is not running: state=Stopped
	I1025 16:24:19.459916   14739 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-262000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-262000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000: exit status 7 (34.411959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-262000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000: exit status 7 (34.2475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-262000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
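Both Pause subtests exit with status 83, which in this run is minikube's advisory path for a profile whose host is state=Stopped, not a pause-specific error. A wrapper-level guard, hypothetical but mirroring the status probe the harness already issues, would skip the pause when the host is not Running:

	if [ "$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p newest-cni-262000)" = "Running" ]; then
		out/minikube-darwin-arm64 pause -p newest-cni-262000
	fi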


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 7.47
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 10.83
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 9.17
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.97
55 TestFunctional/serial/CacheCmd/cache/add_local 1.02
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.24
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.75
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 1.82
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.13
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.32
258 TestNoKubernetes/serial/Stop 2.09
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.67
275 TestStartStop/group/old-k8s-version/serial/Stop 2.03
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
286 TestStartStop/group/no-preload/serial/Stop 3
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 3.21
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.13
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.27
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1025 15:58:11.598884   10998 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1025 15:58:11.599267   10998 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-826000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-826000: exit status 85 (99.143125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:57 PDT |          |
	|         | -p download-only-826000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 15:57:56
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 15:57:56.806003   10999 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:57:56.806162   10999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:57:56.806166   10999 out.go:358] Setting ErrFile to fd 2...
	I1025 15:57:56.806168   10999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:57:56.806313   10999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	W1025 15:57:56.806408   10999 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19758-10490/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19758-10490/.minikube/config/config.json: no such file or directory
	I1025 15:57:56.807771   10999 out.go:352] Setting JSON to true
	I1025 15:57:56.825797   10999 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6314,"bootTime":1729890762,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:57:56.825872   10999 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:57:56.831626   10999 out.go:97] [download-only-826000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:57:56.831745   10999 notify.go:220] Checking for updates...
	W1025 15:57:56.831814   10999 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 15:57:56.835700   10999 out.go:169] MINIKUBE_LOCATION=19758
	I1025 15:57:56.838855   10999 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:57:56.843689   10999 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:57:56.846691   10999 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:57:56.849701   10999 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	W1025 15:57:56.855692   10999 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 15:57:56.855941   10999 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:57:56.859619   10999 out.go:97] Using the qemu2 driver based on user configuration
	I1025 15:57:56.859640   10999 start.go:297] selected driver: qemu2
	I1025 15:57:56.859662   10999 start.go:901] validating driver "qemu2" against <nil>
	I1025 15:57:56.859748   10999 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 15:57:56.862671   10999 out.go:169] Automatically selected the socket_vmnet network
	I1025 15:57:56.868193   10999 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 15:57:56.868297   10999 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 15:57:56.868354   10999 cni.go:84] Creating CNI manager for ""
	I1025 15:57:56.868397   10999 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 15:57:56.868445   10999 start.go:340] cluster config:
	{Name:download-only-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:57:56.873194   10999 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 15:57:56.876740   10999 out.go:97] Downloading VM boot image ...
	I1025 15:57:56.876752   10999 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1025 15:58:02.996092   10999 out.go:97] Starting "download-only-826000" primary control-plane node in "download-only-826000" cluster
	I1025 15:58:02.996141   10999 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 15:58:03.057085   10999 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 15:58:03.057108   10999 cache.go:56] Caching tarball of preloaded images
	I1025 15:58:03.057313   10999 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 15:58:03.061406   10999 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1025 15:58:03.061412   10999 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:03.148592   10999 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 15:58:10.288107   10999 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:10.288296   10999 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:10.982400   10999 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 15:58:10.982634   10999 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/download-only-826000/config.json ...
	I1025 15:58:10.982653   10999 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19758-10490/.minikube/profiles/download-only-826000/config.json: {Name:mke9af12784eec6b05a832561d51659fb6697777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 15:58:10.982909   10999 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 15:58:10.983178   10999 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1025 15:58:11.549360   10999 out.go:193] 
	W1025 15:58:11.554496   10999 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19758-10490/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320 0x109bfd320] Decompressors:map[bz2:0x140001267b0 gz:0x140001267b8 tar:0x14000126710 tar.bz2:0x14000126720 tar.gz:0x14000126730 tar.xz:0x14000126740 tar.zst:0x14000126780 tbz2:0x14000126720 tgz:0x14000126730 txz:0x14000126740 tzst:0x14000126780 xz:0x14000126a00 zip:0x14000126a10 zst:0x14000126a08] Getters:map[file:0x140007147a0 http:0x1400047e500 https:0x1400047e550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1025 15:58:11.554523   10999 out_reason.go:110] 
	W1025 15:58:11.563414   10999 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 15:58:11.566342   10999 out.go:193] 
	
	
	* The control-plane node download-only-826000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-826000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-826000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (7.47s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-831000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-831000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (7.473280833s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.47s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1025 15:58:19.446360   10998 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1025 15:58:19.446410   10998 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-831000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-831000: exit status 85 (84.082ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:57 PDT |                     |
	|         | -p download-only-826000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| delete  | -p download-only-826000        | download-only-826000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT | 25 Oct 24 15:58 PDT |
	| start   | -o=json --download-only        | download-only-831000 | jenkins | v1.34.0 | 25 Oct 24 15:58 PDT |                     |
	|         | -p download-only-831000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 15:58:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 15:58:12.005019   11027 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:58:12.005171   11027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:58:12.005176   11027 out.go:358] Setting ErrFile to fd 2...
	I1025 15:58:12.005179   11027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:58:12.005296   11027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:58:12.006370   11027 out.go:352] Setting JSON to true
	I1025 15:58:12.024005   11027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6330,"bootTime":1729890762,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:58:12.024084   11027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:58:12.028664   11027 out.go:97] [download-only-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:58:12.028747   11027 notify.go:220] Checking for updates...
	I1025 15:58:12.032854   11027 out.go:169] MINIKUBE_LOCATION=19758
	I1025 15:58:12.035892   11027 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:58:12.039814   11027 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:58:12.042856   11027 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:58:12.045879   11027 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	W1025 15:58:12.051878   11027 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 15:58:12.052077   11027 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:58:12.054795   11027 out.go:97] Using the qemu2 driver based on user configuration
	I1025 15:58:12.054803   11027 start.go:297] selected driver: qemu2
	I1025 15:58:12.054807   11027 start.go:901] validating driver "qemu2" against <nil>
	I1025 15:58:12.054857   11027 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 15:58:12.056190   11027 out.go:169] Automatically selected the socket_vmnet network
	I1025 15:58:12.061196   11027 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 15:58:12.061286   11027 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 15:58:12.061307   11027 cni.go:84] Creating CNI manager for ""
	I1025 15:58:12.061340   11027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 15:58:12.061346   11027 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 15:58:12.061387   11027 start.go:340] cluster config:
	{Name:download-only-831000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-831000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:58:12.065722   11027 iso.go:125] acquiring lock: {Name:mk9b3871d00d41639b4aaa63c2532c78b4d0e24e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 15:58:12.068853   11027 out.go:97] Starting "download-only-831000" primary control-plane node in "download-only-831000" cluster
	I1025 15:58:12.068865   11027 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 15:58:12.129978   11027 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1025 15:58:12.130002   11027 cache.go:56] Caching tarball of preloaded images
	I1025 15:58:12.130210   11027 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1025 15:58:12.133388   11027 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1025 15:58:12.133397   11027 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1025 15:58:12.267296   11027 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19758-10490/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-831000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-831000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-831000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
I1025 15:58:19.987421   10998 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-869000 --alsologtostderr --binary-mirror http://127.0.0.1:61946 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-869000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-869000
--- PASS: TestBinaryMirror (0.30s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-362000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-362000: exit status 85 (64.393375ms)

-- stdout --
	* Profile "addons-362000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-362000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-362000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-362000: exit status 85 (68.082834ms)

-- stdout --
	* Profile "addons-362000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-362000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestHyperKitDriverInstallOrUpdate (10.83s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1025 16:09:38.940729   10998 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 16:09:38.940870   10998 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1025 16:09:40.881996   10998 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1025 16:09:40.882228   10998 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1025 16:09:40.882271   10998 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit
I1025 16:09:41.397597   10998 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0 0x1051426e0] Decompressors:map[bz2:0x1400000f840 gz:0x1400000f848 tar:0x1400000f7f0 tar.bz2:0x1400000f800 tar.gz:0x1400000f810 tar.xz:0x1400000f820 tar.zst:0x1400000f830 tbz2:0x1400000f800 tgz:0x1400000f810 txz:0x1400000f820 tzst:0x1400000f830 xz:0x1400000f850 zip:0x1400000f860 zst:0x1400000f858] Getters:map[file:0x1400071f1e0 http:0x1400069fb30 https:0x1400069fb80] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1025 16:09:41.397722   10998 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate911872255/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.83s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status: exit status 7 (36.369708ms)

-- stdout --
	nospam-870000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status: exit status 7 (33.796541ms)

-- stdout --
	nospam-870000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status: exit status 7 (34.268292ms)

-- stdout --
	nospam-870000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause: exit status 83 (43.006416ms)

-- stdout --
	* The control-plane node nospam-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-870000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause: exit status 83 (44.568083ms)

-- stdout --
	* The control-plane node nospam-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-870000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause: exit status 83 (42.699667ms)

-- stdout --
	* The control-plane node nospam-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-870000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause: exit status 83 (44.755875ms)

-- stdout --
	* The control-plane node nospam-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-870000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause: exit status 83 (44.976292ms)

-- stdout --
	* The control-plane node nospam-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-870000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause: exit status 83 (44.746625ms)

-- stdout --
	* The control-plane node nospam-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-870000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (9.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 stop: (3.762310791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 stop: (1.896108s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-870000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-870000 stop: (3.513782916s)
--- PASS: TestErrorSpam/stop (9.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19758-10490/.minikube/files/etc/test/nested/copy/10998/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.97s)

TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local440802813/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cache add minikube-local-cache-test:functional-543000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 cache delete minikube-local-cache-test:functional-543000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-543000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.02s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 config get cpus: exit status 14 (35.164625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 config get cpus: exit status 14 (37.239667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-543000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-543000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (167.3345ms)

-- stdout --
	* [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1025 15:59:50.850316   11585 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:50.850503   11585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:50.850507   11585 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:50.850510   11585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:50.850661   11585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:50.852044   11585 out.go:352] Setting JSON to false
	I1025 15:59:50.873000   11585 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6428,"bootTime":1729890762,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:59:50.873098   11585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:59:50.875531   11585 out.go:177] * [functional-543000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 15:59:50.883023   11585 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 15:59:50.883067   11585 notify.go:220] Checking for updates...
	I1025 15:59:50.889988   11585 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:59:50.893003   11585 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:59:50.896039   11585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:59:50.898962   11585 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 15:59:50.901994   11585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 15:59:50.905381   11585 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:50.905664   11585 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:59:50.909857   11585 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 15:59:50.916966   11585 start.go:297] selected driver: qemu2
	I1025 15:59:50.916973   11585 start.go:901] validating driver "qemu2" against &{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:59:50.917034   11585 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 15:59:50.923927   11585 out.go:201] 
	W1025 15:59:50.928069   11585 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 15:59:50.931886   11585 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-543000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
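
The non-zero exit captured above is minikube's up-front resource validation: a --memory request below the 1800MB floor is rejected with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work starts, which is exactly what the dry-run test asserts. For comparison, a hypothetical dry-run on the same profile that should clear the validator:

	$ out/minikube-darwin-arm64 start -p functional-543000 --dry-run --memory 2048MB --driver=qemu2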

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-543000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-543000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.824791ms)

-- stdout --
	* [functional-543000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 15:59:51.082108   11596 out.go:345] Setting OutFile to fd 1 ...
	I1025 15:59:51.082263   11596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.082266   11596 out.go:358] Setting ErrFile to fd 2...
	I1025 15:59:51.082269   11596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 15:59:51.082403   11596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19758-10490/.minikube/bin
	I1025 15:59:51.083942   11596 out.go:352] Setting JSON to false
	I1025 15:59:51.102378   11596 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6429,"bootTime":1729890762,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 15:59:51.102455   11596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 15:59:51.106934   11596 out.go:177] * [functional-543000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1025 15:59:51.114020   11596 out.go:177]   - MINIKUBE_LOCATION=19758
	I1025 15:59:51.114058   11596 notify.go:220] Checking for updates...
	I1025 15:59:51.120977   11596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	I1025 15:59:51.123974   11596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 15:59:51.126988   11596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 15:59:51.129959   11596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	I1025 15:59:51.133004   11596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 15:59:51.134538   11596 config.go:182] Loaded profile config "functional-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1025 15:59:51.134810   11596 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 15:59:51.138914   11596 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1025 15:59:51.144914   11596 start.go:297] selected driver: qemu2
	I1025 15:59:51.144921   11596 start.go:901] validating driver "qemu2" against &{Name:functional-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729002334-19806@sha256:40e85bdbd09a1ee487c66779d8bda357f3aa054bb4ec597b30029882beba918e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:functional-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 15:59:51.144976   11596 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 15:59:51.151927   11596 out.go:201] 
	W1025 15:59:51.156016   11596 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 15:59:51.159883   11596 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
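
The French output above is the point of this test: minikube selects its message catalog from the process locale, so the same RSRC_INSUFFICIENT_REQ_MEMORY failure surfaces as "Fermeture en raison de...". A sketch of reproducing it by hand, assuming the locale is picked up from the standard environment variables:

	$ LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-543000 --dry-run --memory 250MB --driver=qemu2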

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.7227895s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-543000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image rm kicbase/echo-server:functional-543000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-543000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 image save --daemon kicbase/echo-server:functional-543000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-543000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "53.557084ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "37.882792ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "51.95375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.73725ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
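
Both listings above return in well under 100ms. Since profile list -o json emits a single JSON document, it pipes cleanly into jq; a sketch, assuming jq is available and assuming the output keeps its usual top-level valid/invalid grouping:

	$ out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'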

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013398917s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-543000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-543000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-543000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-543000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-501000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-501000 --output=json --user=testUser: (1.817212667s)
--- PASS: TestJSONOutput/stop/Command (1.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-474000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-474000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.9225ms)

-- stdout --
	{"specversion":"1.0","id":"10bbbfd5-ca0d-47f4-9b4e-35013b71b1fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-474000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1954cc5-081c-46e4-83c9-61b13284cef6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19758"}}
	{"specversion":"1.0","id":"f96b6c43-2db1-4739-b89b-675e039d6a5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig"}}
	{"specversion":"1.0","id":"9685a9e0-baec-40c5-966c-97358a097d32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c02573ff-b9b1-4cce-a0b0-080de1c3c3ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"521d6dff-842b-4b99-abda-f936e8b6d9e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube"}}
	{"specversion":"1.0","id":"d872560f-5d9f-4a64-885a-31120fb70490","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7e6e6588-bb20-458e-a627-e7ba4de8a7c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-474000
--- PASS: TestErrorJSONOutput (0.21s)
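
Each line of the JSON stream above is a CloudEvents-style envelope (specversion 1.0 plus id, source, type, and a data payload), so it post-processes easily. A sketch that keeps only the error events, assuming jq is installed:

	$ out/minikube-darwin-arm64 start -p json-output-error-474000 --output=json --driver=fail | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'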

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.13s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (103.879166ms)

-- stdout --
	* [NoKubernetes-999000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19758
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19758-10490/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19758-10490/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
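
The MK_USAGE exit is the guard this test wants: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error text itself shows the fix when a version is set in global config. A valid no-Kubernetes start on the same profile simply drops the version flag, along the lines of:

	$ out/minikube-darwin-arm64 start -p NoKubernetes-999000 --no-kubernetes --driver=qemu2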

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-999000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-999000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.68825ms)

-- stdout --
	* The control-plane node NoKubernetes-999000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-999000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.32s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.660753875s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.656490084s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.32s)

TestNoKubernetes/serial/Stop (2.09s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-999000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-999000: (2.091989166s)
--- PASS: TestNoKubernetes/serial/Stop (2.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-999000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-999000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.668042ms)

-- stdout --
	* The control-plane node NoKubernetes-999000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-999000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-782000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.67s)

TestStartStop/group/old-k8s-version/serial/Stop (2.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-213000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-213000 --alsologtostderr -v=3: (2.033956333s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000 -n old-k8s-version-213000: exit status 7 (44.888875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-213000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
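
Worth noting here is the exit-code convention: status --format={{.Host}} renders a Go template over the status struct, and a stopped host yields exit status 7, which the test explicitly tolerates ("may be ok"). A quick way to see both the template output and the code by hand, as a sketch:

	$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-213000; echo "exit: $?"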

TestStartStop/group/no-preload/serial/Stop (3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-140000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-140000 --alsologtostderr -v=3: (3.003098667s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-140000 -n no-preload-140000: exit status 7 (52.116834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-140000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-409000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-409000 --alsologtostderr -v=3: (3.207132s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-548000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-548000 --alsologtostderr -v=3: (3.132721666s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-409000 -n embed-certs-409000: exit status 7 (57.299458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-409000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-548000 -n default-k8s-diff-port-548000: exit status 7 (63.053292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-548000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-262000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-262000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-262000 --alsologtostderr -v=3: (3.274352041s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-262000 -n newest-cni-262000: exit status 7 (63.855208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-262000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (8.06s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1447893449/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1729897161713132000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1447893449/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1729897161713132000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1447893449/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1729897161713132000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1447893449/001/test-1729897161713132000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (51.876416ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:21.765581   10998 retry.go:31] will retry after 252.495996ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (98.629875ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:22.119053   10998 retry.go:31] will retry after 438.086807ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.453584ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:22.649917   10998 retry.go:31] will retry after 602.990536ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.019666ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:23.346282   10998 retry.go:31] will retry after 1.055199145s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.131708ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:24.495960   10998 retry.go:31] will retry after 1.35611493s: exit status 83
I1025 15:59:24.933195   10998 retry.go:31] will retry after 9.34967011s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.934167ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:25.947463   10998 retry.go:31] will retry after 3.559406311s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (95.284167ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo umount -f /mount-9p": exit status 83 (51.685458ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1447893449/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (8.06s)
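
The "will retry after …" lines above come from the test framework re-running its probe with a growing, jittered delay until the mount shows up or the window closes. A minimal Go sketch of that retry-with-backoff pattern (illustrative only; the names and constants are not minikube's actual retry.go internals):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or maxWait elapses,
// sleeping a randomized, roughly doubling interval between attempts,
// much like the jittered delays printed in the log above.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		// Jitter keeps parallel tests from retrying in lockstep.
		sleep := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		if time.Now().Add(sleep).After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("exit status 83") // stand-in for the ssh probe
		}
		return nil
	}, 10*time.Second)
}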

TestFunctional/parallel/MountCmd/specific-port (10.64s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1152001920/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.425167ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:29.841640   10998 retry.go:31] will retry after 347.526523ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.432041ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:30.278006   10998 retry.go:31] will retry after 931.845578ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.287667ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:31.302476   10998 retry.go:31] will retry after 1.40564006s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.696209ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:32.801206   10998 retry.go:31] will retry after 1.420371783s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
I1025 15:59:34.285024   10998 retry.go:31] will retry after 8.011853259s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.984125ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:34.315889   10998 retry.go:31] will retry after 3.005328474s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.376625ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:37.413896   10998 retry.go:31] will retry after 2.739338326s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.573708ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "sudo umount -f /mount-9p": exit status 83 (49.819833ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-543000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1152001920/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.64s)
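
The interleaved `Temporary Error: Get "http:": http: no Host in request URL` retries (here and in the any-port section above) come from a parallel probe, likely the tunnel test, whose target URL never received a host address. Go's HTTP client rejects such a URL before any network I/O happens; a one-line reproduction:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// A scheme with an empty host fails client-side, producing the
	// exact error string retried in the log above.
	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}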

TestFunctional/parallel/MountCmd/VerifyCleanup (10.36s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1571730776/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1571730776/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1571730776/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (84.761042ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:40.504398   10998 retry.go:31] will retry after 749.774421ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (92.677125ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:41.349219   10998 retry.go:31] will retry after 802.449322ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (90.221167ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:42.244207   10998 retry.go:31] will retry after 1.496143115s: exit status 83
I1025 15:59:42.298911   10998 retry.go:31] will retry after 21.576724149s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (91.410833ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:43.834051   10998 retry.go:31] will retry after 1.245000476s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (90.113875ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:45.171450   10998 retry.go:31] will retry after 1.384102924s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (91.113292ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
I1025 15:59:46.649057   10998 retry.go:31] will retry after 3.647489701s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-543000 ssh "findmnt -T" /mount1: exit status 83 (89.752625ms)

-- stdout --
	* The control-plane node functional-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-543000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1571730776/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1571730776/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-543000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1571730776/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.36s)
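
VerifyCleanup launches three background mount daemons and, even though the mounts never appear in this run, still tears all three down in its stopping phase. A sketch of that start/stop lifecycle (the host source directory is hypothetical; the binary, profile, targets, and flags match the log):

package main

import (
	"context"
	"os/exec"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel() // the "stopping" step: cancel kills any daemon still running

	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
		// CommandContext ties each daemon's lifetime to ctx.
		cmd := exec.CommandContext(ctx, "out/minikube-darwin-arm64",
			"mount", "-p", "functional-543000",
			"/tmp/src:"+target, // hypothetical host directory
			"--alsologtostderr", "-v=1")
		if err := cmd.Start(); err != nil {
			panic(err)
		}
	}

	// ... mount verification would run here before cancel() fires ...
}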

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
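
This skip is driven by a test flag rather than by environment detection. A sketch of the flag-gated pattern (the flag name matches the log message; the test body is illustrative):

package example

import (
	"flag"
	"testing"
)

var gvisor = flag.Bool("gvisor", false, "run gVisor-dependent tests")

func TestGvisorAddonExample(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... gVisor runtime assertions would follow ...
}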

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
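
The KIC, storage, and container-upgrade skips above all gate on the active driver: this run uses the qemu2 driver, so docker/podman-only tests bail out early. A hedged sketch of such a gate (skipUnlessDriver is illustrative, not minikube's actual helper; the real tests read the driver from test flags):

package example

import "testing"

// skipUnlessDriver skips the test unless the active driver is one of
// the allowed set, mirroring the "only runs with docker driver" skips.
func skipUnlessDriver(t *testing.T, active string, allowed ...string) {
	t.Helper()
	for _, d := range allowed {
		if active == d {
			return
		}
	}
	t.Skipf("only runs with %v driver", allowed)
}

func TestInsufficientStorageExample(t *testing.T) {
	skipUnlessDriver(t, "qemu2", "docker") // qemu2 is not docker: skip
}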

TestNetworkPlugins/group/cilium (2.48s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-864000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-864000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-864000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/hosts:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/resolv.conf:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-864000

>>> host: crictl pods:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: crictl containers:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> k8s: describe netcat deployment:
error: context "cilium-864000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-864000" does not exist

>>> k8s: netcat logs:
error: context "cilium-864000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-864000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-864000" does not exist

>>> k8s: coredns logs:
error: context "cilium-864000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-864000" does not exist

>>> k8s: api server logs:
error: context "cilium-864000" does not exist

>>> host: /etc/cni:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: ip a s:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: ip r s:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: iptables-save:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: iptables table nat:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-864000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-864000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-864000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-864000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-864000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-864000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-864000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-864000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-864000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-864000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-864000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: kubelet daemon config:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> k8s: kubelet logs:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-864000

>>> host: docker daemon status:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: docker daemon config:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: docker system info:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: cri-docker daemon status:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: cri-docker daemon config:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: cri-dockerd version:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: containerd daemon status:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: containerd daemon config:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: containerd config dump:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: crio daemon status:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: crio daemon config:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: /etc/crio:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

>>> host: crio config:
* Profile "cilium-864000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-864000"

----------------------- debugLogs end: cilium-864000 [took: 2.36715875s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-864000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-864000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)
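
Every debugLogs command above fails the same way for the same reason: the "k8s: kubectl config" dump shows clusters, contexts, and users are all null, so any kubectl call pinned to the cilium-864000 context fails client-side before reaching a cluster, and the minikube-side probes fail because the profile was never created. A minimal reproduction of the configuration error seen:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// With an empty kubeconfig, a context-pinned call fails locally.
	out, err := exec.Command("kubectl", "--context", "cilium-864000",
		"get", "pods").CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
	// Prints: Error in configuration: context was not found for
	// specified context: cilium-864000
}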

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-910000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-910000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
