Test Report: QEMU_macOS 20052

8d1e3f592e1f661c71a144f8266060bd168d3f35:2024-12-05:37356

Tests failed (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.76
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.01
27 TestAddons/Setup 10.16
28 TestCertOptions 10.11
29 TestCertExpiration 195.33
30 TestDockerFlags 10.35
31 TestForceSystemdFlag 10.27
32 TestForceSystemdEnv 10.08
38 TestErrorSpam/setup 9.85
47 TestFunctional/serial/StartWithProxy 10.01
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.75
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.21
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.09
85 TestFunctional/parallel/CertSync 0.31
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.06
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 110.66
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.07
142 TestMultiControlPlane/serial/DeployApp 114.91
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 40.29
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.08
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.69
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 10.02
165 TestJSONOutput/start/Command 9.82
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.37
197 TestMountStart/serial/StartWithMountFirst 10.19
200 TestMultiNode/serial/FreshStart2Nodes 10.16
201 TestMultiNode/serial/DeployApp2Nodes 103.44
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 50.73
209 TestMultiNode/serial/RestartKeepsNodes 7.27
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 3.94
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.15
217 TestPreload 10.05
219 TestScheduledStopUnix 10.14
220 TestSkaffold 12.25
223 TestRunningBinaryUpgrade 601.03
225 TestKubernetesUpgrade 17.34
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.95
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.13
241 TestStoppedBinaryUpgrade/Upgrade 573.78
243 TestPause/serial/Start 9.96
253 TestNoKubernetes/serial/StartWithK8s 9.88
254 TestNoKubernetes/serial/StartWithStopK8s 5.32
255 TestNoKubernetes/serial/Start 5.33
259 TestNoKubernetes/serial/StartNoArgs 5.37
261 TestNetworkPlugins/group/auto/Start 9.84
262 TestNetworkPlugins/group/kindnet/Start 10
263 TestNetworkPlugins/group/calico/Start 10.06
264 TestNetworkPlugins/group/custom-flannel/Start 9.77
265 TestNetworkPlugins/group/false/Start 9.88
266 TestNetworkPlugins/group/enable-default-cni/Start 9.96
267 TestNetworkPlugins/group/flannel/Start 9.9
268 TestNetworkPlugins/group/bridge/Start 9.84
269 TestNetworkPlugins/group/kubenet/Start 9.82
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.9
285 TestStartStop/group/embed-certs/serial/FirstStart 9.91
286 TestStartStop/group/no-preload/serial/DeployApp 0.1
287 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
289 TestStartStop/group/embed-certs/serial/DeployApp 0.1
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
293 TestStartStop/group/no-preload/serial/SecondStart 5.27
295 TestStartStop/group/embed-certs/serial/SecondStart 5.3
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
299 TestStartStop/group/no-preload/serial/Pause 0.11
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.14
302 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
303 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
305 TestStartStop/group/embed-certs/serial/Pause 0.11
307 TestStartStop/group/newest-cni/serial/FirstStart 9.96
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
317 TestStartStop/group/newest-cni/serial/SecondStart 5.27
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12

TestDownloadOnly/v1.20.0/json-events (19.76s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-751000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-751000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (19.760947958s)

-- stdout --
	{"specversion":"1.0","id":"14cd826b-8712-4720-9847-7ccd355d9b57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-751000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9fdee6b-3639-42b4-b95c-e51b3e2a56a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"49d72db0-e71d-47ea-90ba-7fc44df2ff9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig"}}
	{"specversion":"1.0","id":"a925a658-b2cf-4536-9d63-57ab80a73ca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5ca01f81-235c-4b0a-bffc-1192a49905b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7cd166a-9366-4cd0-9ba1-e4ea585a0c0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube"}}
	{"specversion":"1.0","id":"96ea4ac8-f18a-497c-9728-bce614abad2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"91e0f88d-6dc1-4ab1-ae87-723e448199c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bd9dc85-ed49-46f2-ae60-aed994bcf44d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c3fe88da-065c-4133-b9f2-2d4aefd134d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f050fb4-7d0a-4916-b1d3-249e1f04eb1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-751000\" primary control-plane node in \"download-only-751000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"77e2234e-f53d-40e7-b5c3-68c4b711f63b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba2a427e-bb59-4321-91b4-63e873d95df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105614320 0x105614320 0x105614320 0x105614320 0x105614320 0x105614320 0x105614320] Decompressors:map[bz2:0x14000491380 gz:0x14000491388 tar:0x14000491330 tar.bz2:0x14000491340 tar.gz:0x14000491350 tar.xz:0x14000491360 tar.zst:0x14000491370 tbz2:0x14000491340 tgz:0x14
000491350 txz:0x14000491360 tzst:0x14000491370 xz:0x14000491390 zip:0x140004913a0 zst:0x14000491398] Getters:map[file:0x14000790c60 http:0x14000d16140 https:0x14000d16190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"12497134-9d30-4a94-8d39-d906f111dcb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1205 10:49:55.395181    9137 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:49:55.395343    9137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:49:55.395347    9137 out.go:358] Setting ErrFile to fd 2...
	I1205 10:49:55.395350    9137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:49:55.395467    9137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	W1205 10:49:55.395576    9137 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20052-8600/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20052-8600/.minikube/config/config.json: no such file or directory
	I1205 10:49:55.396912    9137 out.go:352] Setting JSON to true
	I1205 10:49:55.414870    9137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4767,"bootTime":1733419828,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:49:55.414946    9137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:49:55.420899    9137 out.go:97] [download-only-751000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:49:55.421056    9137 notify.go:220] Checking for updates...
	W1205 10:49:55.421095    9137 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 10:49:55.424769    9137 out.go:169] MINIKUBE_LOCATION=20052
	I1205 10:49:55.427802    9137 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:49:55.432843    9137 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:49:55.436797    9137 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:49:55.439823    9137 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	W1205 10:49:55.445735    9137 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 10:49:55.445987    9137 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:49:55.449755    9137 out.go:97] Using the qemu2 driver based on user configuration
	I1205 10:49:55.449774    9137 start.go:297] selected driver: qemu2
	I1205 10:49:55.449787    9137 start.go:901] validating driver "qemu2" against <nil>
	I1205 10:49:55.449860    9137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 10:49:55.452804    9137 out.go:169] Automatically selected the socket_vmnet network
	I1205 10:49:55.458216    9137 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1205 10:49:55.458322    9137 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 10:49:55.458369    9137 cni.go:84] Creating CNI manager for ""
	I1205 10:49:55.458401    9137 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 10:49:55.458461    9137 start.go:340] cluster config:
	{Name:download-only-751000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-751000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:49:55.462970    9137 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:49:55.467805    9137 out.go:97] Downloading VM boot image ...
	I1205 10:49:55.467826    9137 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1205 10:50:04.196206    9137 out.go:97] Starting "download-only-751000" primary control-plane node in "download-only-751000" cluster
	I1205 10:50:04.196226    9137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 10:50:04.266679    9137 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 10:50:04.266703    9137 cache.go:56] Caching tarball of preloaded images
	I1205 10:50:04.266918    9137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 10:50:04.273173    9137 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 10:50:04.273182    9137 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:04.365810    9137 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 10:50:13.759386    9137 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:13.759553    9137 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:14.454153    9137 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 10:50:14.454348    9137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/download-only-751000/config.json ...
	I1205 10:50:14.454364    9137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/download-only-751000/config.json: {Name:mk74e11fe0fc9351120f8578bdc0f833b5da9df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 10:50:14.454615    9137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 10:50:14.454873    9137 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1205 10:50:15.055057    9137 out.go:193] 
	W1205 10:50:15.064136    9137 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105614320 0x105614320 0x105614320 0x105614320 0x105614320 0x105614320 0x105614320] Decompressors:map[bz2:0x14000491380 gz:0x14000491388 tar:0x14000491330 tar.bz2:0x14000491340 tar.gz:0x14000491350 tar.xz:0x14000491360 tar.zst:0x14000491370 tbz2:0x14000491340 tgz:0x14000491350 txz:0x14000491360 tzst:0x14000491370 xz:0x14000491390 zip:0x140004913a0 zst:0x14000491398] Getters:map[file:0x14000790c60 http:0x14000d16140 https:0x14000d16190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1205 10:50:15.064163    9137 out_reason.go:110] 
	W1205 10:50:15.073069    9137 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:50:15.077028    9137 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-751000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (19.76s)
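Note: the root cause above is a 404 on the kubectl checksum URL; v1.20.0 appears to predate published darwin/arm64 kubectl builds, so the .sha256 file does not exist. A minimal manual check of that URL (the URL is copied verbatim from the error message; the curl invocation is a hypothetical reproduction, not part of the test run):

	# Fetch only the final HTTP status of the checksum file the getter downloads first
	curl -s -o /dev/null -w '%{http_code}\n' -L https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# Expected on this runner: 404, matching "bad response code: 404" in the error above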

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
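Note: this failure is downstream of TestDownloadOnly/v1.20.0/json-events above; kubectl was never cached because that download step exited with status 40. A hypothetical manual equivalent of the test's assertion (path copied from the error message):

	# stat the cached binary the test expects to exist
	stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	# Fails here with "no such file or directory", as the test reports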

TestOffline (10.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-687000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-687000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.849900125s)

-- stdout --
	* [offline-docker-687000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-687000" primary control-plane node in "offline-docker-687000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:01:54.297671   10848 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:01:54.297830   10848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:01:54.297833   10848 out.go:358] Setting ErrFile to fd 2...
	I1205 11:01:54.297836   10848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:01:54.297968   10848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:01:54.299137   10848 out.go:352] Setting JSON to false
	I1205 11:01:54.318537   10848 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5486,"bootTime":1733419828,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:01:54.318633   10848 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:01:54.323487   10848 out.go:177] * [offline-docker-687000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:01:54.331532   10848 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:01:54.331535   10848 notify.go:220] Checking for updates...
	I1205 11:01:54.339316   10848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:01:54.342478   10848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:01:54.345498   10848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:01:54.348538   10848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:01:54.351483   10848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:01:54.354884   10848 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:01:54.354955   10848 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:01:54.359458   10848 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:01:54.366461   10848 start.go:297] selected driver: qemu2
	I1205 11:01:54.366468   10848 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:01:54.366475   10848 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:01:54.368795   10848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:01:54.371429   10848 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:01:54.374530   10848 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:01:54.374544   10848 cni.go:84] Creating CNI manager for ""
	I1205 11:01:54.374567   10848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:01:54.374571   10848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:01:54.374615   10848 start.go:340] cluster config:
	{Name:offline-docker-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:01:54.379244   10848 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:54.386432   10848 out.go:177] * Starting "offline-docker-687000" primary control-plane node in "offline-docker-687000" cluster
	I1205 11:01:54.390412   10848 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:01:54.390447   10848 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:01:54.390466   10848 cache.go:56] Caching tarball of preloaded images
	I1205 11:01:54.390557   10848 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:01:54.390564   10848 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:01:54.390631   10848 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/offline-docker-687000/config.json ...
	I1205 11:01:54.390642   10848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/offline-docker-687000/config.json: {Name:mk4ebbaf516c3e42c6036146be31a9d6d18a9d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:01:54.391002   10848 start.go:360] acquireMachinesLock for offline-docker-687000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:01:54.391049   10848 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "offline-docker-687000"
	I1205 11:01:54.391060   10848 start.go:93] Provisioning new machine with config: &{Name:offline-docker-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:01:54.391088   10848 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:01:54.395414   10848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:01:54.410824   10848 start.go:159] libmachine.API.Create for "offline-docker-687000" (driver="qemu2")
	I1205 11:01:54.410856   10848 client.go:168] LocalClient.Create starting
	I1205 11:01:54.410934   10848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:01:54.410975   10848 main.go:141] libmachine: Decoding PEM data...
	I1205 11:01:54.410989   10848 main.go:141] libmachine: Parsing certificate...
	I1205 11:01:54.411039   10848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:01:54.411068   10848 main.go:141] libmachine: Decoding PEM data...
	I1205 11:01:54.411076   10848 main.go:141] libmachine: Parsing certificate...
	I1205 11:01:54.411467   10848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:01:54.577492   10848 main.go:141] libmachine: Creating SSH key...
	I1205 11:01:54.659176   10848 main.go:141] libmachine: Creating Disk image...
	I1205 11:01:54.659183   10848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:01:54.661629   10848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2
	I1205 11:01:54.672257   10848 main.go:141] libmachine: STDOUT: 
	I1205 11:01:54.672280   10848 main.go:141] libmachine: STDERR: 
	I1205 11:01:54.672361   10848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2 +20000M
	I1205 11:01:54.681890   10848 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:01:54.681911   10848 main.go:141] libmachine: STDERR: 
	I1205 11:01:54.681932   10848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2
	I1205 11:01:54.681938   10848 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:01:54.681951   10848 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:01:54.681983   10848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:10:2b:24:05:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2
	I1205 11:01:54.684127   10848 main.go:141] libmachine: STDOUT: 
	I1205 11:01:54.684153   10848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:01:54.684175   10848 client.go:171] duration metric: took 273.315375ms to LocalClient.Create
	I1205 11:01:56.686214   10848 start.go:128] duration metric: took 2.295144167s to createHost
	I1205 11:01:56.686236   10848 start.go:83] releasing machines lock for "offline-docker-687000", held for 2.295206916s
	W1205 11:01:56.686250   10848 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:01:56.694145   10848 out.go:177] * Deleting "offline-docker-687000" in qemu2 ...
	W1205 11:01:56.711497   10848 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:01:56.711505   10848 start.go:729] Will try again in 5 seconds ...
	I1205 11:02:01.713750   10848 start.go:360] acquireMachinesLock for offline-docker-687000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:01.714314   10848 start.go:364] duration metric: took 433.791µs to acquireMachinesLock for "offline-docker-687000"
	I1205 11:02:01.714453   10848 start.go:93] Provisioning new machine with config: &{Name:offline-docker-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:01.714754   10848 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:01.728139   10848 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:01.778191   10848 start.go:159] libmachine.API.Create for "offline-docker-687000" (driver="qemu2")
	I1205 11:02:01.778260   10848 client.go:168] LocalClient.Create starting
	I1205 11:02:01.778413   10848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:01.778503   10848 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:01.778521   10848 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:01.778591   10848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:01.778653   10848 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:01.778665   10848 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:01.779445   10848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:01.954115   10848 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:02.044528   10848 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:02.044536   10848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:02.044736   10848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2
	I1205 11:02:02.054527   10848 main.go:141] libmachine: STDOUT: 
	I1205 11:02:02.054550   10848 main.go:141] libmachine: STDERR: 
	I1205 11:02:02.054612   10848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2 +20000M
	I1205 11:02:02.063232   10848 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:02.063254   10848 main.go:141] libmachine: STDERR: 
	I1205 11:02:02.063273   10848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2
	I1205 11:02:02.063277   10848 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:02.063287   10848 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:02.063317   10848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:bf:84:b2:a8:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/offline-docker-687000/disk.qcow2
	I1205 11:02:02.065051   10848 main.go:141] libmachine: STDOUT: 
	I1205 11:02:02.065065   10848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:02.065078   10848 client.go:171] duration metric: took 286.815459ms to LocalClient.Create
	I1205 11:02:04.067241   10848 start.go:128] duration metric: took 2.352482375s to createHost
	I1205 11:02:04.067323   10848 start.go:83] releasing machines lock for "offline-docker-687000", held for 2.35300825s
	W1205 11:02:04.067689   10848 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:04.081313   10848 out.go:201] 
	W1205 11:02:04.086492   10848 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:02:04.086533   10848 out.go:270] * 
	* 
	W1205 11:02:04.089386   10848 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:02:04.099292   10848 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-687000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-05 11:02:04.115113 -0800 PST m=+728.808925501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-687000 -n offline-docker-687000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-687000 -n offline-docker-687000: exit status 7 (73.350625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-687000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-687000
--- FAIL: TestOffline (10.01s)
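Note: the repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means nothing is listening on the socket_vmnet socket on this host; the same error recurs in most qemu2 start failures in this report. A hedged host-side diagnostic sketch (the socket path comes from the log; the Homebrew service name is an assumption about how socket_vmnet was installed on this agent):

	# Is the socket present, and is a socket_vmnet process running?
	ls -l /var/run/socket_vmnet
	ps aux | grep -i '[s]ocket_vmnet'
	# If not running, restart it, e.g. via Homebrew services (assumes a brew-managed install):
	sudo brew services restart socket_vmnet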

TestAddons/Setup (10.16s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-904000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-904000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.159187709s)

-- stdout --
	* [addons-904000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-904000" primary control-plane node in "addons-904000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-904000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 10:50:25.957284    9211 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:50:25.957431    9211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:50:25.957435    9211 out.go:358] Setting ErrFile to fd 2...
	I1205 10:50:25.957437    9211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:50:25.957563    9211 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:50:25.958813    9211 out.go:352] Setting JSON to false
	I1205 10:50:25.976470    9211 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4797,"bootTime":1733419828,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:50:25.976541    9211 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:50:25.981215    9211 out.go:177] * [addons-904000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:50:25.988062    9211 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:50:25.988110    9211 notify.go:220] Checking for updates...
	I1205 10:50:25.995053    9211 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:50:25.999094    9211 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:50:26.003189    9211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:50:26.006075    9211 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:50:26.009102    9211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:50:26.012181    9211 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:50:26.016110    9211 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 10:50:26.023037    9211 start.go:297] selected driver: qemu2
	I1205 10:50:26.023045    9211 start.go:901] validating driver "qemu2" against <nil>
	I1205 10:50:26.023052    9211 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:50:26.025604    9211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 10:50:26.029100    9211 out.go:177] * Automatically selected the socket_vmnet network
	I1205 10:50:26.032091    9211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 10:50:26.032111    9211 cni.go:84] Creating CNI manager for ""
	I1205 10:50:26.032132    9211 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 10:50:26.032136    9211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 10:50:26.032177    9211 start.go:340] cluster config:
	{Name:addons-904000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:50:26.036750    9211 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:50:26.045077    9211 out.go:177] * Starting "addons-904000" primary control-plane node in "addons-904000" cluster
	I1205 10:50:26.049082    9211 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:50:26.049100    9211 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:50:26.049113    9211 cache.go:56] Caching tarball of preloaded images
	I1205 10:50:26.049204    9211 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:50:26.049210    9211 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:50:26.049440    9211 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/addons-904000/config.json ...
	I1205 10:50:26.049452    9211 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/addons-904000/config.json: {Name:mkacdbbc5afa08a836e998a270352aef931dfabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 10:50:26.049946    9211 start.go:360] acquireMachinesLock for addons-904000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:50:26.050041    9211 start.go:364] duration metric: took 88.875µs to acquireMachinesLock for "addons-904000"
	I1205 10:50:26.050054    9211 start.go:93] Provisioning new machine with config: &{Name:addons-904000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 10:50:26.050090    9211 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 10:50:26.056027    9211 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 10:50:26.073699    9211 start.go:159] libmachine.API.Create for "addons-904000" (driver="qemu2")
	I1205 10:50:26.073761    9211 client.go:168] LocalClient.Create starting
	I1205 10:50:26.073922    9211 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 10:50:26.241713    9211 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 10:50:26.371677    9211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 10:50:26.536253    9211 main.go:141] libmachine: Creating SSH key...
	I1205 10:50:26.594868    9211 main.go:141] libmachine: Creating Disk image...
	I1205 10:50:26.594873    9211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 10:50:26.595132    9211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2
	I1205 10:50:26.605171    9211 main.go:141] libmachine: STDOUT: 
	I1205 10:50:26.605190    9211 main.go:141] libmachine: STDERR: 
	I1205 10:50:26.605258    9211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2 +20000M
	I1205 10:50:26.613601    9211 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 10:50:26.613616    9211 main.go:141] libmachine: STDERR: 
	I1205 10:50:26.613629    9211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2
	I1205 10:50:26.613632    9211 main.go:141] libmachine: Starting QEMU VM...
	I1205 10:50:26.613674    9211 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:50:26.613701    9211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:92:12:35:44:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2
	I1205 10:50:26.615446    9211 main.go:141] libmachine: STDOUT: 
	I1205 10:50:26.615460    9211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:50:26.615488    9211 client.go:171] duration metric: took 541.718792ms to LocalClient.Create
	I1205 10:50:28.617737    9211 start.go:128] duration metric: took 2.567561584s to createHost
	I1205 10:50:28.617845    9211 start.go:83] releasing machines lock for "addons-904000", held for 2.56781925s
	W1205 10:50:28.617897    9211 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:50:28.632019    9211 out.go:177] * Deleting "addons-904000" in qemu2 ...
	W1205 10:50:28.661791    9211 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:50:28.661825    9211 start.go:729] Will try again in 5 seconds ...
	I1205 10:50:33.664055    9211 start.go:360] acquireMachinesLock for addons-904000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:50:33.664718    9211 start.go:364] duration metric: took 526.75µs to acquireMachinesLock for "addons-904000"
	I1205 10:50:33.664871    9211 start.go:93] Provisioning new machine with config: &{Name:addons-904000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-904000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 10:50:33.665204    9211 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 10:50:33.674861    9211 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 10:50:33.723857    9211 start.go:159] libmachine.API.Create for "addons-904000" (driver="qemu2")
	I1205 10:50:33.723908    9211 client.go:168] LocalClient.Create starting
	I1205 10:50:33.724064    9211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 10:50:33.724148    9211 main.go:141] libmachine: Decoding PEM data...
	I1205 10:50:33.724174    9211 main.go:141] libmachine: Parsing certificate...
	I1205 10:50:33.724247    9211 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 10:50:33.724307    9211 main.go:141] libmachine: Decoding PEM data...
	I1205 10:50:33.724319    9211 main.go:141] libmachine: Parsing certificate...
	I1205 10:50:33.725542    9211 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 10:50:33.899442    9211 main.go:141] libmachine: Creating SSH key...
	I1205 10:50:34.015878    9211 main.go:141] libmachine: Creating Disk image...
	I1205 10:50:34.015886    9211 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 10:50:34.016126    9211 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2
	I1205 10:50:34.026102    9211 main.go:141] libmachine: STDOUT: 
	I1205 10:50:34.026125    9211 main.go:141] libmachine: STDERR: 
	I1205 10:50:34.026194    9211 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2 +20000M
	I1205 10:50:34.034667    9211 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 10:50:34.034687    9211 main.go:141] libmachine: STDERR: 
	I1205 10:50:34.034702    9211 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2
	I1205 10:50:34.034709    9211 main.go:141] libmachine: Starting QEMU VM...
	I1205 10:50:34.034755    9211 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:50:34.034789    9211 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:27:c0:30:af:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/addons-904000/disk.qcow2
	I1205 10:50:34.036571    9211 main.go:141] libmachine: STDOUT: 
	I1205 10:50:34.036585    9211 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:50:34.036599    9211 client.go:171] duration metric: took 312.6885ms to LocalClient.Create
	I1205 10:50:36.039241    9211 start.go:128] duration metric: took 2.373708042s to createHost
	I1205 10:50:36.039358    9211 start.go:83] releasing machines lock for "addons-904000", held for 2.374633917s
	W1205 10:50:36.039829    9211 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-904000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-904000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:50:36.050359    9211 out.go:201] 
	W1205 10:50:36.058411    9211 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:50:36.058453    9211 out.go:270] * 
	* 
	W1205 10:50:36.061216    9211 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:50:36.069349    9211 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-904000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.16s)
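The log above shows minikube's two-attempt provisioning flow: create the host, hit the socket error, delete the profile, wait five seconds (start.go:729), try once more, then exit 80 with GUEST_PROVISION. A hedged sketch of that control flow, illustrative only and not minikube's actual implementation:

-- sketch: the retry shape seen in start.go:714/729 --
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create path; here it always
	// fails the way this run does, so both attempts are exercised.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const name = "addons-904000"
		if err := createHost(name); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(name); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}
-- /sketch --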

TestCertOptions (10.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-748000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-748000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.8228025s)

-- stdout --
	* [cert-options-748000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-748000" primary control-plane node in "cert-options-748000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-748000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-748000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-748000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-748000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-748000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (88.345708ms)

-- stdout --
	* The control-plane node cert-options-748000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-748000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-748000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-748000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-748000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-748000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.042542ms)

-- stdout --
	* The control-plane node cert-options-748000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-748000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-748000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-748000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-748000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-05 11:02:34.694803 -0800 PST m=+759.388938460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-748000 -n cert-options-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-748000 -n cert-options-748000: exit status 7 (34.843ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-748000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-748000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-748000
--- FAIL: TestCertOptions (10.11s)
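For reference, the assertion behind cert_options_test.go:69 never had a certificate to inspect because the VM was never up. A self-contained sketch of the same SAN check; the local file path is hypothetical, since the real test reads /var/lib/minikube/certs/apiserver.crt over SSH:

-- sketch: verifying apiserver SAN entries --
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Values from the test's --apiserver-ips / --apiserver-names flags.
		for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
			found := false
			for _, ip := range cert.IPAddresses {
				if ip.Equal(net.ParseIP(want)) {
					found = true
				}
			}
			fmt.Printf("SAN includes %s: %v\n", want, found)
		}
		for _, want := range []string{"localhost", "www.google.com"} {
			found := false
			for _, dns := range cert.DNSNames {
				if dns == want {
					found = true
				}
			}
			fmt.Printf("SAN includes %s: %v\n", want, found)
		}
	}
-- /sketch --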

TestCertExpiration (195.33s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-404000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-404000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.940477125s)

-- stdout --
	* [cert-expiration-404000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-404000" primary control-plane node in "cert-expiration-404000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-404000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-404000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-404000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-404000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-404000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.230141333s)

-- stdout --
	* [cert-expiration-404000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-404000" primary control-plane node in "cert-expiration-404000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-404000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-404000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-404000" primary control-plane node in "cert-expiration-404000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-404000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-404000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-05 11:05:34.730701 -0800 PST m=+939.426737876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-404000 -n cert-expiration-404000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-404000 -n cert-expiration-404000: exit status 7 (72.172125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-404000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-404000
--- FAIL: TestCertExpiration (195.33s)
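TestCertExpiration's second start (--cert-expiration=8760h) is expected to warn that the 3-minute certificates minted by the first start have expired; the warning never appears here because no VM ever booted. A sketch of the underlying expiry check, again with a hypothetical local path:

-- sketch: checking certificate expiry --
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical local copy
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if remaining := time.Until(cert.NotAfter); remaining <= 0 {
			fmt.Printf("certificate expired %s ago\n", -remaining)
		} else {
			fmt.Printf("certificate still valid for %s\n", remaining)
		}
	}
-- /sketch --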

TestDockerFlags (10.35s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-345000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-345000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.10325125s)

-- stdout --
	* [docker-flags-345000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-345000" primary control-plane node in "docker-flags-345000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-345000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:02:14.387122   11035 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:02:14.387290   11035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:02:14.387294   11035 out.go:358] Setting ErrFile to fd 2...
	I1205 11:02:14.387296   11035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:02:14.387439   11035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:02:14.388622   11035 out.go:352] Setting JSON to false
	I1205 11:02:14.406256   11035 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5506,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:02:14.406338   11035 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:02:14.412460   11035 out.go:177] * [docker-flags-345000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:02:14.419470   11035 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:02:14.419523   11035 notify.go:220] Checking for updates...
	I1205 11:02:14.427462   11035 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:02:14.429023   11035 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:02:14.433432   11035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:02:14.436438   11035 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:02:14.438058   11035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:02:14.441825   11035 config.go:182] Loaded profile config "force-systemd-flag-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:02:14.441895   11035 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:02:14.441946   11035 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:02:14.446424   11035 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:02:14.452518   11035 start.go:297] selected driver: qemu2
	I1205 11:02:14.452526   11035 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:02:14.452533   11035 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:02:14.455050   11035 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:02:14.459455   11035 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:02:14.461108   11035 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1205 11:02:14.461127   11035 cni.go:84] Creating CNI manager for ""
	I1205 11:02:14.461160   11035 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:02:14.461165   11035 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:02:14.461200   11035 start.go:340] cluster config:
	{Name:docker-flags-345000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:02:14.465891   11035 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:02:14.474496   11035 out.go:177] * Starting "docker-flags-345000" primary control-plane node in "docker-flags-345000" cluster
	I1205 11:02:14.478401   11035 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:02:14.478416   11035 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:02:14.478438   11035 cache.go:56] Caching tarball of preloaded images
	I1205 11:02:14.478516   11035 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:02:14.478522   11035 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:02:14.478586   11035 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/docker-flags-345000/config.json ...
	I1205 11:02:14.478599   11035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/docker-flags-345000/config.json: {Name:mkedbac9ae3fc3dd580647369e640fc50146cf6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:02:14.479125   11035 start.go:360] acquireMachinesLock for docker-flags-345000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:14.479177   11035 start.go:364] duration metric: took 45.208µs to acquireMachinesLock for "docker-flags-345000"
	I1205 11:02:14.479189   11035 start.go:93] Provisioning new machine with config: &{Name:docker-flags-345000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:14.479221   11035 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:14.487411   11035 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:14.504670   11035 start.go:159] libmachine.API.Create for "docker-flags-345000" (driver="qemu2")
	I1205 11:02:14.504705   11035 client.go:168] LocalClient.Create starting
	I1205 11:02:14.504791   11035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:14.504830   11035 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:14.504840   11035 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:14.504875   11035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:14.504906   11035 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:14.504914   11035 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:14.505405   11035 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:14.666551   11035 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:14.819283   11035 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:14.819290   11035 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:14.819508   11035 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2
	I1205 11:02:14.829774   11035 main.go:141] libmachine: STDOUT: 
	I1205 11:02:14.829793   11035 main.go:141] libmachine: STDERR: 
	I1205 11:02:14.829857   11035 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2 +20000M
	I1205 11:02:14.838302   11035 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:14.838316   11035 main.go:141] libmachine: STDERR: 
	I1205 11:02:14.838337   11035 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2
	I1205 11:02:14.838341   11035 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:14.838356   11035 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:14.838385   11035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:c7:95:84:97:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2
	I1205 11:02:14.840198   11035 main.go:141] libmachine: STDOUT: 
	I1205 11:02:14.840211   11035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:14.840230   11035 client.go:171] duration metric: took 335.521833ms to LocalClient.Create
	I1205 11:02:16.842445   11035 start.go:128] duration metric: took 2.363158042s to createHost
	I1205 11:02:16.842549   11035 start.go:83] releasing machines lock for "docker-flags-345000", held for 2.363341625s
	W1205 11:02:16.842594   11035 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:16.870793   11035 out.go:177] * Deleting "docker-flags-345000" in qemu2 ...
	W1205 11:02:16.894265   11035 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:16.894284   11035 start.go:729] Will try again in 5 seconds ...
	I1205 11:02:21.896483   11035 start.go:360] acquireMachinesLock for docker-flags-345000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:22.007170   11035 start.go:364] duration metric: took 110.5505ms to acquireMachinesLock for "docker-flags-345000"
	I1205 11:02:22.007261   11035 start.go:93] Provisioning new machine with config: &{Name:docker-flags-345000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:22.007547   11035 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:22.023021   11035 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:22.071723   11035 start.go:159] libmachine.API.Create for "docker-flags-345000" (driver="qemu2")
	I1205 11:02:22.071774   11035 client.go:168] LocalClient.Create starting
	I1205 11:02:22.071932   11035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:22.072033   11035 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:22.072053   11035 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:22.072133   11035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:22.072193   11035 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:22.072207   11035 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:22.072824   11035 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:22.248553   11035 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:22.383698   11035 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:22.383708   11035 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:22.383917   11035 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2
	I1205 11:02:22.393937   11035 main.go:141] libmachine: STDOUT: 
	I1205 11:02:22.393954   11035 main.go:141] libmachine: STDERR: 
	I1205 11:02:22.394016   11035 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2 +20000M
	I1205 11:02:22.402393   11035 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:22.402409   11035 main.go:141] libmachine: STDERR: 
	I1205 11:02:22.402423   11035 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2
	I1205 11:02:22.402428   11035 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:22.402439   11035 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:22.402465   11035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:d4:35:4d:9b:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/docker-flags-345000/disk.qcow2
	I1205 11:02:22.404313   11035 main.go:141] libmachine: STDOUT: 
	I1205 11:02:22.404333   11035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:22.404346   11035 client.go:171] duration metric: took 332.569958ms to LocalClient.Create
	I1205 11:02:24.406558   11035 start.go:128] duration metric: took 2.398994666s to createHost
	I1205 11:02:24.406654   11035 start.go:83] releasing machines lock for "docker-flags-345000", held for 2.39947425s
	W1205 11:02:24.406973   11035 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-345000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-345000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:24.423930   11035 out.go:201] 
	W1205 11:02:24.429981   11035 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:02:24.429999   11035 out.go:270] * 
	* 
	W1205 11:02:24.431510   11035 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:02:24.444680   11035 out.go:201] 

** /stderr **
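The stderr trace above shows the qemu2 driver's disk pipeline before launch: qemu-img converts a raw scratch file to qcow2, then grows the image by the requested 20000 MB. Reduced to a sketch, with the machine-directory paths from the log shortened to relative names, the two steps are:

$ qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
$ qemu-img resize disk.qcow2 +20000M

Both steps succeed in this run ("Image resized."); the start only fails afterwards, when socket_vmnet_client tries to obtain a network file descriptor for the VM.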
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-345000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-345000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-345000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (82.193334ms)

-- stdout --
	* The control-plane node docker-flags-345000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-345000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-345000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-345000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-345000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-345000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-345000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-345000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-345000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.936208ms)

-- stdout --
	* The control-plane node docker-flags-345000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-345000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-345000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-345000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-345000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-345000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-05 11:02:24.589661 -0800 PST m=+749.283689710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-345000 -n docker-flags-345000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-345000 -n docker-flags-345000: exit status 7 (33.095417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-345000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-345000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-345000
--- FAIL: TestDockerFlags (10.35s)
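Every qemu2 start in this test fails at the same call: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, which points to the socket_vmnet daemon being down on the CI host rather than to the docker flags under test. A minimal triage sketch, assuming the stock install paths seen in the log (the gateway address is an illustrative assumption, not taken from this run):

$ ls -l /var/run/socket_vmnet
$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the socket is missing or nothing is listening on it, the "Connection refused" seen throughout this report is the expected symptom.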

TestForceSystemdFlag (10.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-527000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-527000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.071249375s)

-- stdout --
	* [force-systemd-flag-527000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-527000" primary control-plane node in "force-systemd-flag-527000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-527000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:02:09.317629   11014 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:02:09.317774   11014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:02:09.317778   11014 out.go:358] Setting ErrFile to fd 2...
	I1205 11:02:09.317780   11014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:02:09.317914   11014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:02:09.319019   11014 out.go:352] Setting JSON to false
	I1205 11:02:09.336481   11014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5501,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:02:09.336556   11014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:02:09.340303   11014 out.go:177] * [force-systemd-flag-527000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:02:09.360351   11014 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:02:09.360375   11014 notify.go:220] Checking for updates...
	I1205 11:02:09.368181   11014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:02:09.371240   11014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:02:09.374246   11014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:02:09.377232   11014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:02:09.380221   11014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:02:09.383539   11014 config.go:182] Loaded profile config "force-systemd-env-497000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:02:09.383620   11014 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:02:09.383671   11014 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:02:09.387156   11014 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:02:09.393178   11014 start.go:297] selected driver: qemu2
	I1205 11:02:09.393184   11014 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:02:09.393190   11014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:02:09.395910   11014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:02:09.399185   11014 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:02:09.403301   11014 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:02:09.403318   11014 cni.go:84] Creating CNI manager for ""
	I1205 11:02:09.403343   11014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:02:09.403347   11014 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:02:09.403376   11014 start.go:340] cluster config:
	{Name:force-systemd-flag-527000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:02:09.408421   11014 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:02:09.416207   11014 out.go:177] * Starting "force-systemd-flag-527000" primary control-plane node in "force-systemd-flag-527000" cluster
	I1205 11:02:09.420014   11014 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:02:09.420032   11014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:02:09.420054   11014 cache.go:56] Caching tarball of preloaded images
	I1205 11:02:09.420133   11014 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:02:09.420139   11014 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:02:09.420195   11014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/force-systemd-flag-527000/config.json ...
	I1205 11:02:09.420206   11014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/force-systemd-flag-527000/config.json: {Name:mk08fed0e0597ba1e92779379b0c9919dc96c785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:02:09.420757   11014 start.go:360] acquireMachinesLock for force-systemd-flag-527000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:09.420815   11014 start.go:364] duration metric: took 50.416µs to acquireMachinesLock for "force-systemd-flag-527000"
	I1205 11:02:09.420829   11014 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:09.420856   11014 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:09.429066   11014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:09.448023   11014 start.go:159] libmachine.API.Create for "force-systemd-flag-527000" (driver="qemu2")
	I1205 11:02:09.448047   11014 client.go:168] LocalClient.Create starting
	I1205 11:02:09.448128   11014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:09.448173   11014 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:09.448183   11014 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:09.448221   11014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:09.448254   11014 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:09.448263   11014 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:09.448762   11014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:09.609521   11014 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:09.713728   11014 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:09.713735   11014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:09.713954   11014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2
	I1205 11:02:09.724068   11014 main.go:141] libmachine: STDOUT: 
	I1205 11:02:09.724089   11014 main.go:141] libmachine: STDERR: 
	I1205 11:02:09.724154   11014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2 +20000M
	I1205 11:02:09.732586   11014 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:09.732599   11014 main.go:141] libmachine: STDERR: 
	I1205 11:02:09.732623   11014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2
	I1205 11:02:09.732631   11014 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:09.732642   11014 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:09.732670   11014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:17:5f:75:94:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2
	I1205 11:02:09.734466   11014 main.go:141] libmachine: STDOUT: 
	I1205 11:02:09.734479   11014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:09.734505   11014 client.go:171] duration metric: took 286.45525ms to LocalClient.Create
	I1205 11:02:11.736669   11014 start.go:128] duration metric: took 2.315814958s to createHost
	I1205 11:02:11.736727   11014 start.go:83] releasing machines lock for "force-systemd-flag-527000", held for 2.315926875s
	W1205 11:02:11.736785   11014 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:11.754164   11014 out.go:177] * Deleting "force-systemd-flag-527000" in qemu2 ...
	W1205 11:02:11.780064   11014 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:11.780113   11014 start.go:729] Will try again in 5 seconds ...
	I1205 11:02:16.782237   11014 start.go:360] acquireMachinesLock for force-systemd-flag-527000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:16.842670   11014 start.go:364] duration metric: took 60.309791ms to acquireMachinesLock for "force-systemd-flag-527000"
	I1205 11:02:16.842793   11014 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:16.843078   11014 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:16.858714   11014 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:16.904864   11014 start.go:159] libmachine.API.Create for "force-systemd-flag-527000" (driver="qemu2")
	I1205 11:02:16.904909   11014 client.go:168] LocalClient.Create starting
	I1205 11:02:16.905052   11014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:16.905136   11014 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:16.905153   11014 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:16.905211   11014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:16.905270   11014 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:16.905286   11014 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:16.906012   11014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:17.114074   11014 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:17.273831   11014 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:17.273846   11014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:17.274081   11014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2
	I1205 11:02:17.284564   11014 main.go:141] libmachine: STDOUT: 
	I1205 11:02:17.284588   11014 main.go:141] libmachine: STDERR: 
	I1205 11:02:17.284663   11014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2 +20000M
	I1205 11:02:17.293110   11014 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:17.293126   11014 main.go:141] libmachine: STDERR: 
	I1205 11:02:17.293141   11014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2
	I1205 11:02:17.293145   11014 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:17.293154   11014 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:17.293203   11014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c8:cb:45:46:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-flag-527000/disk.qcow2
	I1205 11:02:17.295065   11014 main.go:141] libmachine: STDOUT: 
	I1205 11:02:17.295075   11014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:17.295087   11014 client.go:171] duration metric: took 390.177292ms to LocalClient.Create
	I1205 11:02:19.297237   11014 start.go:128] duration metric: took 2.454144875s to createHost
	I1205 11:02:19.297373   11014 start.go:83] releasing machines lock for "force-systemd-flag-527000", held for 2.454626375s
	W1205 11:02:19.297729   11014 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:19.312384   11014 out.go:201] 
	W1205 11:02:19.326690   11014 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:02:19.326743   11014 out.go:270] * 
	* 
	W1205 11:02:19.329276   11014 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:02:19.341334   11014 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-527000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-527000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-527000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.783541ms)

-- stdout --
	* The control-plane node force-systemd-flag-527000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-527000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-527000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-05 11:02:19.441448 -0800 PST m=+744.135422043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-527000 -n force-systemd-flag-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-527000 -n force-systemd-flag-527000: exit status 7 (36.733542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-527000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-527000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-527000
--- FAIL: TestForceSystemdFlag (10.27s)
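For reference, the assertion this test never reached: with --force-systemd, docker_test.go probes the guest's Docker cgroup driver over ssh using the exact command from the log. On a healthy run the probe would be expected to print systemd (the expected output is an assumption from the flag's intent, since no VM came up here):

$ out/minikube-darwin-arm64 -p force-systemd-flag-527000 ssh "docker info --format {{.CgroupDriver}}"
systemd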

TestForceSystemdEnv (10.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-497000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1205 11:02:04.464238    9136 install.go:79] stdout: 
W1205 11:02:04.464387    9136 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit 

I1205 11:02:04.464409    9136 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit]
I1205 11:02:04.481801    9136 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit]
I1205 11:02:04.496480    9136 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit]
I1205 11:02:04.510388    9136 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit]
I1205 11:02:04.538445    9136 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 11:02:04.538621    9136 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1205 11:02:06.333250    9136 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1205 11:02:06.333275    9136 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1205 11:02:06.333332    9136 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1205 11:02:06.333367    9136 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit
I1205 11:02:06.721284    9136 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0] Decompressors:map[bz2:0x1400000ef78 gz:0x1400000f070 tar:0x1400000f020 tar.bz2:0x1400000f030 tar.gz:0x1400000f040 tar.xz:0x1400000f050 tar.zst:0x1400000f060 tbz2:0x1400000f030 tgz:0x1400000f040 txz:0x1400000f050 tzst:0x1400000f060 xz:0x1400000f078 zip:0x1400000f080 zst:0x1400000f090] Getters:map[file:0x140007e4b30 http:0x14000490910 https:0x14000490960] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 11:02:06.721395    9136 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit
I1205 11:02:09.233326    9136 install.go:79] stdout: 
W1205 11:02:09.233539    9136 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit 

I1205 11:02:09.233565    9136 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit]
I1205 11:02:09.250217    9136 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit]
I1205 11:02:09.263614    9136 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit]
I1205 11:02:09.274320    9136 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/002/docker-machine-driver-hyperkit]
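The pid-9136 lines above are interleaved output from the parallel TestHyperKitDriverInstallOrUpdate test, not from TestForceSystemdEnv (pid 10982 below). They show minikube's download fallback: the arch-specific driver's checksum file returns 404, so the common (non-arm64) asset is fetched instead. The 404 can be confirmed outside the harness (illustrative check only):

$ curl -sI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 | head -n 1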
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-497000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.880339292s)

-- stdout --
	* [force-systemd-env-497000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-497000" primary control-plane node in "force-systemd-env-497000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-497000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:02:04.306485   10982 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:02:04.306637   10982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:02:04.306640   10982 out.go:358] Setting ErrFile to fd 2...
	I1205 11:02:04.306642   10982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:02:04.306771   10982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:02:04.307989   10982 out.go:352] Setting JSON to false
	I1205 11:02:04.326836   10982 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5496,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:02:04.326905   10982 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:02:04.332601   10982 out.go:177] * [force-systemd-env-497000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:02:04.340603   10982 notify.go:220] Checking for updates...
	I1205 11:02:04.344584   10982 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:02:04.351519   10982 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:02:04.358569   10982 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:02:04.366547   10982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:02:04.375614   10982 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:02:04.382533   10982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1205 11:02:04.387947   10982 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:02:04.387998   10982 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:02:04.392587   10982 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:02:04.399569   10982 start.go:297] selected driver: qemu2
	I1205 11:02:04.399574   10982 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:02:04.399579   10982 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:02:04.402139   10982 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:02:04.405581   10982 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:02:04.409645   10982 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:02:04.409660   10982 cni.go:84] Creating CNI manager for ""
	I1205 11:02:04.409682   10982 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:02:04.409695   10982 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:02:04.409732   10982 start.go:340] cluster config:
	{Name:force-systemd-env-497000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:02:04.414718   10982 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:02:04.421559   10982 out.go:177] * Starting "force-systemd-env-497000" primary control-plane node in "force-systemd-env-497000" cluster
	I1205 11:02:04.424613   10982 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:02:04.424635   10982 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:02:04.424655   10982 cache.go:56] Caching tarball of preloaded images
	I1205 11:02:04.424736   10982 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:02:04.424741   10982 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:02:04.424807   10982 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/force-systemd-env-497000/config.json ...
	I1205 11:02:04.424818   10982 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/force-systemd-env-497000/config.json: {Name:mk361b441ac0f2c838aa2198ac09ec861e86a294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:02:04.425075   10982 start.go:360] acquireMachinesLock for force-systemd-env-497000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:04.425124   10982 start.go:364] duration metric: took 41.958µs to acquireMachinesLock for "force-systemd-env-497000"
	I1205 11:02:04.425136   10982 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:04.425160   10982 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:04.433605   10982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:04.450359   10982 start.go:159] libmachine.API.Create for "force-systemd-env-497000" (driver="qemu2")
	I1205 11:02:04.450391   10982 client.go:168] LocalClient.Create starting
	I1205 11:02:04.450489   10982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:04.450532   10982 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:04.450543   10982 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:04.450582   10982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:04.450614   10982 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:04.450625   10982 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:04.451011   10982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:04.614306   10982 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:04.732985   10982 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:04.732999   10982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:04.733244   10982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2
	I1205 11:02:04.743353   10982 main.go:141] libmachine: STDOUT: 
	I1205 11:02:04.743386   10982 main.go:141] libmachine: STDERR: 
	I1205 11:02:04.743454   10982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2 +20000M
	I1205 11:02:04.752286   10982 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:04.752301   10982 main.go:141] libmachine: STDERR: 
	I1205 11:02:04.752317   10982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2
	I1205 11:02:04.752322   10982 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:04.752333   10982 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:04.752363   10982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:61:2d:38:be:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2
	I1205 11:02:04.754290   10982 main.go:141] libmachine: STDOUT: 
	I1205 11:02:04.754308   10982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:04.754328   10982 client.go:171] duration metric: took 303.934208ms to LocalClient.Create
	I1205 11:02:06.756547   10982 start.go:128] duration metric: took 2.33137775s to createHost
	I1205 11:02:06.756615   10982 start.go:83] releasing machines lock for "force-systemd-env-497000", held for 2.331505458s
	W1205 11:02:06.756671   10982 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:06.773798   10982 out.go:177] * Deleting "force-systemd-env-497000" in qemu2 ...
	W1205 11:02:06.800839   10982 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:06.800870   10982 start.go:729] Will try again in 5 seconds ...
	I1205 11:02:11.803021   10982 start.go:360] acquireMachinesLock for force-systemd-env-497000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:02:11.803388   10982 start.go:364] duration metric: took 264.542µs to acquireMachinesLock for "force-systemd-env-497000"
	I1205 11:02:11.803472   10982 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-497000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-497000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:02:11.803686   10982 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:02:11.811043   10982 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1205 11:02:11.853507   10982 start.go:159] libmachine.API.Create for "force-systemd-env-497000" (driver="qemu2")
	I1205 11:02:11.853573   10982 client.go:168] LocalClient.Create starting
	I1205 11:02:11.853685   10982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:02:11.853748   10982 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:11.853765   10982 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:11.853825   10982 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:02:11.853881   10982 main.go:141] libmachine: Decoding PEM data...
	I1205 11:02:11.853895   10982 main.go:141] libmachine: Parsing certificate...
	I1205 11:02:11.854645   10982 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:02:12.031260   10982 main.go:141] libmachine: Creating SSH key...
	I1205 11:02:12.082668   10982 main.go:141] libmachine: Creating Disk image...
	I1205 11:02:12.082673   10982 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:02:12.082862   10982 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2
	I1205 11:02:12.092679   10982 main.go:141] libmachine: STDOUT: 
	I1205 11:02:12.092701   10982 main.go:141] libmachine: STDERR: 
	I1205 11:02:12.092768   10982 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2 +20000M
	I1205 11:02:12.101198   10982 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:02:12.101211   10982 main.go:141] libmachine: STDERR: 
	I1205 11:02:12.101228   10982 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2
	I1205 11:02:12.101235   10982 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:02:12.101243   10982 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:02:12.101271   10982 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e8:6e:a8:a7:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/force-systemd-env-497000/disk.qcow2
	I1205 11:02:12.103073   10982 main.go:141] libmachine: STDOUT: 
	I1205 11:02:12.103094   10982 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:02:12.103107   10982 client.go:171] duration metric: took 249.531666ms to LocalClient.Create
	I1205 11:02:14.105251   10982 start.go:128] duration metric: took 2.30155475s to createHost
	I1205 11:02:14.105312   10982 start.go:83] releasing machines lock for "force-systemd-env-497000", held for 2.301928s
	W1205 11:02:14.105769   10982 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:02:14.120470   10982 out.go:201] 
	W1205 11:02:14.124528   10982 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:02:14.124556   10982 out.go:270] * 
	* 
	W1205 11:02:14.127011   10982 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:02:14.138450   10982 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-497000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-497000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-497000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.781208ms)

-- stdout --
	* The control-plane node force-systemd-env-497000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-497000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-497000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-05 11:02:14.236539 -0800 PST m=+738.930458460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-497000 -n force-systemd-env-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-497000 -n force-systemd-env-497000: exit status 7 (36.092583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-497000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-497000
--- FAIL: TestForceSystemdEnv (10.08s)
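Every create attempt in this block dies at the same step: qemu-img builds and resizes the disk successfully ("Image resized."), but libmachine then launches QEMU through socket_vmnet_client, which cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), so the VM never boots. A minimal pre-flight sketch before re-running the qemu2 suite, assuming socket_vmnet was installed via Homebrew as the logged paths suggest (the service-start command follows the usual minikube qemu2 driver setup and is not taken from this log):

    # Is the daemon's unix socket present?
    ls -l /var/run/socket_vmnet
    # Start the daemon if it is not running; it needs root to open vmnet.
    HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet
    # Smoke-test the same client handoff libmachine uses: the client connects
    # to the socket, then execs the given command with the socket on fd 3
    # (hence "-netdev socket,id=net0,fd=3" in the QEMU invocation above).
    # Any command can stand in for QEMU here:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo reachable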

TestErrorSpam/setup (9.85s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-846000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-846000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 --driver=qemu2 : exit status 80 (9.851668167s)

-- stdout --
	* [nospam-846000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-846000" primary control-plane node in "nospam-846000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-846000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-846000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-846000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20052
- KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-846000" primary control-plane node in "nospam-846000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-846000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.85s)
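The three missing kubeadm sub-steps are emitted only after a VM boots and kubeadm runs; here start aborts during host creation, so the error-spam check trips on the provisioning failures instead. The daemon's state on the agent can be checked directly; a sketch, where the launchd/Homebrew service names are assumptions based on a standard Homebrew install, not taken from this log:

    # launchd view: a running root service shows a PID in the first column.
    sudo launchctl list | grep socket_vmnet
    # Homebrew's view of the same service.
    sudo brew services info socket_vmnet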

TestFunctional/serial/StartWithProxy (10.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-606000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-606000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.929363458s)

-- stdout --
	* [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-606000" primary control-plane node in "functional-606000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-606000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51585 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51585 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51585 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-606000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20052
- KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-606000" primary control-plane node in "functional-606000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-606000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51585 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51585 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51585 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (75.029542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.01s)
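Both "want" patterns here ("Found network options:" and "You appear to be using a proxy") are printed only after the host is up and minikube evaluates the proxy environment, so this failure is strictly downstream of the socket_vmnet one. Once the daemon is running, the proxy path can be exercised by hand with the same throwaway proxy value the test used (port 51585, from the stderr above):

    HTTP_PROXY=localhost:51585 out/minikube-darwin-arm64 start -p functional-606000 \
        --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2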

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1205 10:51:06.665206    9136 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-606000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-606000 --alsologtostderr -v=8: exit status 80 (5.191842375s)

-- stdout --
	* [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-606000" primary control-plane node in "functional-606000" cluster
	* Restarting existing qemu2 VM for "functional-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 10:51:06.698616    9351 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:51:06.698773    9351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:51:06.698776    9351 out.go:358] Setting ErrFile to fd 2...
	I1205 10:51:06.698779    9351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:51:06.698902    9351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:51:06.700029    9351 out.go:352] Setting JSON to false
	I1205 10:51:06.717732    9351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4838,"bootTime":1733419828,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:51:06.717806    9351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:51:06.721737    9351 out.go:177] * [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:51:06.727639    9351 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:51:06.727680    9351 notify.go:220] Checking for updates...
	I1205 10:51:06.734590    9351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:51:06.738647    9351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:51:06.741646    9351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:51:06.744559    9351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:51:06.747620    9351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:51:06.750955    9351 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:51:06.751010    9351 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:51:06.755633    9351 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 10:51:06.762594    9351 start.go:297] selected driver: qemu2
	I1205 10:51:06.762599    9351 start.go:901] validating driver "qemu2" against &{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:51:06.762650    9351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:51:06.765118    9351 cni.go:84] Creating CNI manager for ""
	I1205 10:51:06.765158    9351 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 10:51:06.765206    9351 start.go:340] cluster config:
	{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:51:06.769627    9351 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:51:06.776600    9351 out.go:177] * Starting "functional-606000" primary control-plane node in "functional-606000" cluster
	I1205 10:51:06.780650    9351 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:51:06.780667    9351 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:51:06.780680    9351 cache.go:56] Caching tarball of preloaded images
	I1205 10:51:06.780754    9351 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:51:06.780759    9351 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:51:06.780815    9351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/functional-606000/config.json ...
	I1205 10:51:06.781408    9351 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:51:06.781446    9351 start.go:364] duration metric: took 31.75µs to acquireMachinesLock for "functional-606000"
	I1205 10:51:06.781455    9351 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:51:06.781460    9351 fix.go:54] fixHost starting: 
	I1205 10:51:06.781580    9351 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
	W1205 10:51:06.781587    9351 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:51:06.789624    9351 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
	I1205 10:51:06.793580    9351 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:51:06.793616    9351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
	I1205 10:51:06.795746    9351 main.go:141] libmachine: STDOUT: 
	I1205 10:51:06.795764    9351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:51:06.795793    9351 fix.go:56] duration metric: took 14.33325ms for fixHost
	I1205 10:51:06.795798    9351 start.go:83] releasing machines lock for "functional-606000", held for 14.347709ms
	W1205 10:51:06.795802    9351 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:51:06.795834    9351 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:51:06.795838    9351 start.go:729] Will try again in 5 seconds ...
	I1205 10:51:11.797975    9351 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:51:11.798392    9351 start.go:364] duration metric: took 345.666µs to acquireMachinesLock for "functional-606000"
	I1205 10:51:11.798534    9351 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:51:11.798555    9351 fix.go:54] fixHost starting: 
	I1205 10:51:11.799305    9351 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
	W1205 10:51:11.799331    9351 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:51:11.807848    9351 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
	I1205 10:51:11.811809    9351 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:51:11.811959    9351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
	I1205 10:51:11.822329    9351 main.go:141] libmachine: STDOUT: 
	I1205 10:51:11.822381    9351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:51:11.822479    9351 fix.go:56] duration metric: took 23.927667ms for fixHost
	I1205 10:51:11.822493    9351 start.go:83] releasing machines lock for "functional-606000", held for 24.080375ms
	W1205 10:51:11.822652    9351 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:51:11.829784    9351 out.go:201] 
	W1205 10:51:11.833879    9351 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:51:11.833903    9351 out.go:270] * 
	* 
	W1205 10:51:11.836438    9351 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:51:11.844908    9351 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-606000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.193458459s for "functional-606000" cluster.
I1205 10:51:11.859040    9136 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (74.154625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
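Unlike the fresh-create tests above, soft start takes the fixHost path: the functional-606000 profile already exists with state=Stopped, so minikube tries "Restarting existing qemu2 VM" and hits the same socket_vmnet refusal. The surviving profile can be inspected without touching QEMU; a sketch using the paths logged above:

    out/minikube-darwin-arm64 profile list
    head /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/functional-606000/config.json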

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.617666ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-606000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (35.067916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
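kubectl has no current context because the cluster was never provisioned, so minikube never wrote a functional-606000 entry into the kubeconfig at /Users/jenkins/minikube-integration/20052-8600/kubeconfig. The same state can be confirmed with plain kubectl:

    kubectl config get-contexts        # no functional-606000 entry expected
    kubectl config current-context     # exits 1: "current-context is not set"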

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-606000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-606000 get po -A: exit status 1 (27.086667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-606000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-606000\n"*: args "kubectl --context functional-606000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-606000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (34.67225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl images: exit status 83 (46.801834ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (45.996666ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-606000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.781542ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.965958ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)
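
For reference, the cache_reload test drives a delete-then-restore cycle entirely through the minikube CLI; every step above exits 83 only because the guest is stopped. The same sequence, with the commands taken verbatim from the log, would behave as follows against a running node (a sketch of the intended flow, not output from this run):

    # Remove the cached image from inside the node.
    out/minikube-darwin-arm64 -p functional-606000 ssh sudo docker rmi registry.k8s.io/pause:latest

    # Verify the image is gone; crictl inspecti exits non-zero for a
    # missing image.
    out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest

    # Push everything in minikube's local cache back into the node.
    out/minikube-darwin-arm64 -p functional-606000 cache reload

    # The inspect is expected to succeed again after the reload.
    out/minikube-darwin-arm64 -p functional-606000 ssh sudo crictl inspecti registry.k8s.io/pause:latest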

TestFunctional/serial/MinikubeKubectlCmd (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 kubectl -- --context functional-606000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 kubectl -- --context functional-606000 get pods: exit status 1 (710.868666ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-606000
	* no server found for cluster "functional-606000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-606000 kubectl -- --context functional-606000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (36.308459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-606000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-606000 get pods: exit status 1 (1.172643459s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-606000
	* no server found for cluster "functional-606000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-606000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (35.11525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.21s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-606000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-606000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.199247208s)

-- stdout --
	* [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-606000" primary control-plane node in "functional-606000" cluster
	* Restarting existing qemu2 VM for "functional-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-606000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-606000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.200367416s for "functional-606000" cluster.
I1205 10:51:22.695695    9136 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (73.056625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
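
The root cause surfaced here, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, recurs throughout the run: the qemu2 driver launches the VM through socket_vmnet_client against the Unix socket at SocketVMnetPath, and nothing is listening on it. A hedged diagnostic sketch, assuming socket_vmnet was installed via Homebrew (the service label and socket path may differ on other machines):

    # The qemu2 driver dials this Unix socket (SocketVMnetPath in the
    # cluster config above); "Connection refused" means no daemon is
    # listening on it.
    ls -l /var/run/socket_vmnet

    # Assuming a Homebrew install, check whether the daemon is running.
    sudo brew services list | grep socket_vmnet

    # One way to (re)start it under the same assumption.
    sudo brew services start socket_vmnet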

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-606000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-606000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.617792ms)

** stderr ** 
	error: context "functional-606000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-606000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (34.916458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 logs: exit status 83 (82.662041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:49 PST |                     |
	|         | -p download-only-751000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| delete  | -p download-only-751000                                                  | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| start   | -o=json --download-only                                                  | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | -p download-only-386000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| delete  | -p download-only-751000                                                  | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| start   | --download-only -p                                                       | binary-mirror-193000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | binary-mirror-193000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51554                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-193000                                                  | binary-mirror-193000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| addons  | enable dashboard -p                                                      | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | addons-904000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | addons-904000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-904000 --wait=true                                             | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-904000                                                         | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| start   | -p nospam-846000 -n=1 --memory=2250 --wait=false                         | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-846000                                                         | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| start   | -p functional-606000                                                     | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-606000                                                     | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | minikube-local-cache-test:functional-606000                              |                      |         |         |                     |                     |
	| cache   | functional-606000 cache delete                                           | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | minikube-local-cache-test:functional-606000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	| ssh     | functional-606000 ssh sudo                                               | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-606000                                                        | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-606000 ssh                                                    | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-606000 cache reload                                           | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	| ssh     | functional-606000 ssh                                                    | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-606000 kubectl --                                             | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | --context functional-606000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-606000                                                     | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 10:51:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 10:51:17.527150    9426 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:51:17.527331    9426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:51:17.527333    9426 out.go:358] Setting ErrFile to fd 2...
	I1205 10:51:17.527335    9426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:51:17.527449    9426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:51:17.528475    9426 out.go:352] Setting JSON to false
	I1205 10:51:17.546243    9426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4849,"bootTime":1733419828,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:51:17.546313    9426 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:51:17.552746    9426 out.go:177] * [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:51:17.562682    9426 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:51:17.562715    9426 notify.go:220] Checking for updates...
	I1205 10:51:17.571632    9426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:51:17.575669    9426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:51:17.578618    9426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:51:17.581635    9426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:51:17.584641    9426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:51:17.587855    9426 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:51:17.587917    9426 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:51:17.592674    9426 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 10:51:17.599564    9426 start.go:297] selected driver: qemu2
	I1205 10:51:17.599568    9426 start.go:901] validating driver "qemu2" against &{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:51:17.599613    9426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:51:17.602246    9426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 10:51:17.602271    9426 cni.go:84] Creating CNI manager for ""
	I1205 10:51:17.602296    9426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 10:51:17.602364    9426 start.go:340] cluster config:
	{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:51:17.606968    9426 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:51:17.613655    9426 out.go:177] * Starting "functional-606000" primary control-plane node in "functional-606000" cluster
	I1205 10:51:17.617707    9426 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:51:17.617720    9426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:51:17.617733    9426 cache.go:56] Caching tarball of preloaded images
	I1205 10:51:17.617826    9426 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:51:17.617830    9426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:51:17.617895    9426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/functional-606000/config.json ...
	I1205 10:51:17.618434    9426 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:51:17.618482    9426 start.go:364] duration metric: took 43.084µs to acquireMachinesLock for "functional-606000"
	I1205 10:51:17.618489    9426 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:51:17.618493    9426 fix.go:54] fixHost starting: 
	I1205 10:51:17.618619    9426 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
	W1205 10:51:17.618625    9426 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:51:17.621607    9426 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
	I1205 10:51:17.629641    9426 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:51:17.629675    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
	I1205 10:51:17.631978    9426 main.go:141] libmachine: STDOUT: 
	I1205 10:51:17.631993    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:51:17.632025    9426 fix.go:56] duration metric: took 13.532708ms for fixHost
	I1205 10:51:17.632028    9426 start.go:83] releasing machines lock for "functional-606000", held for 13.54375ms
	W1205 10:51:17.632033    9426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:51:17.632065    9426 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:51:17.632069    9426 start.go:729] Will try again in 5 seconds ...
	I1205 10:51:22.633693    9426 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:51:22.634096    9426 start.go:364] duration metric: took 340.083µs to acquireMachinesLock for "functional-606000"
	I1205 10:51:22.634274    9426 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:51:22.634289    9426 fix.go:54] fixHost starting: 
	I1205 10:51:22.635006    9426 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
	W1205 10:51:22.635027    9426 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:51:22.644542    9426 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
	I1205 10:51:22.648503    9426 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:51:22.648655    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
	I1205 10:51:22.658341    9426 main.go:141] libmachine: STDOUT: 
	I1205 10:51:22.658426    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:51:22.658496    9426 fix.go:56] duration metric: took 24.209125ms for fixHost
	I1205 10:51:22.658508    9426 start.go:83] releasing machines lock for "functional-606000", held for 24.357583ms
	W1205 10:51:22.658677    9426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:51:22.667511    9426 out.go:201] 
	W1205 10:51:22.671611    9426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:51:22.671633    9426 out.go:270] * 
	W1205 10:51:22.674251    9426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:51:22.680601    9426 out.go:201] 
	
	
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-606000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:49 PST |                     |
|         | -p download-only-751000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| delete  | -p download-only-751000                                                  | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| start   | -o=json --download-only                                                  | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | -p download-only-386000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| delete  | -p download-only-751000                                                  | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| delete  | -p download-only-386000                                                  | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| start   | --download-only -p                                                       | binary-mirror-193000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | binary-mirror-193000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51554                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-193000                                                  | binary-mirror-193000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| addons  | enable dashboard -p                                                      | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | addons-904000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | addons-904000                                                            |                      |         |         |                     |                     |
| start   | -p addons-904000 --wait=true                                             | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-904000                                                         | addons-904000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| start   | -p nospam-846000 -n=1 --memory=2250 --wait=false                         | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-846000 --log_dir                                                  | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-846000                                                         | nospam-846000        | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
| start   | -p functional-606000                                                     | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-606000                                                     | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-606000 cache add                                              | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | minikube-local-cache-test:functional-606000                              |                      |         |         |                     |                     |
| cache   | functional-606000 cache delete                                           | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | minikube-local-cache-test:functional-606000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
| ssh     | functional-606000 ssh sudo                                               | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-606000                                                        | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-606000 ssh                                                    | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-606000 cache reload                                           | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
| ssh     | functional-606000 ssh                                                    | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:51 PST | 05 Dec 24 10:51 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-606000 kubectl --                                             | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | --context functional-606000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-606000                                                     | functional-606000    | jenkins | v1.34.0 | 05 Dec 24 10:51 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/12/05 10:51:17
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 10:51:17.527150    9426 out.go:345] Setting OutFile to fd 1 ...
I1205 10:51:17.527331    9426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:51:17.527333    9426 out.go:358] Setting ErrFile to fd 2...
I1205 10:51:17.527335    9426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:51:17.527449    9426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:51:17.528475    9426 out.go:352] Setting JSON to false
I1205 10:51:17.546243    9426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4849,"bootTime":1733419828,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1205 10:51:17.546313    9426 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1205 10:51:17.552746    9426 out.go:177] * [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1205 10:51:17.562682    9426 out.go:177]   - MINIKUBE_LOCATION=20052
I1205 10:51:17.562715    9426 notify.go:220] Checking for updates...
I1205 10:51:17.571632    9426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
I1205 10:51:17.575669    9426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1205 10:51:17.578618    9426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 10:51:17.581635    9426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
I1205 10:51:17.584641    9426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1205 10:51:17.587855    9426 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:51:17.587917    9426 driver.go:394] Setting default libvirt URI to qemu:///system
I1205 10:51:17.592674    9426 out.go:177] * Using the qemu2 driver based on existing profile
I1205 10:51:17.599564    9426 start.go:297] selected driver: qemu2
I1205 10:51:17.599568    9426 start.go:901] validating driver "qemu2" against &{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 10:51:17.599613    9426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 10:51:17.602246    9426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 10:51:17.602271    9426 cni.go:84] Creating CNI manager for ""
I1205 10:51:17.602296    9426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 10:51:17.602364    9426 start.go:340] cluster config:
{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 10:51:17.606968    9426 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 10:51:17.613655    9426 out.go:177] * Starting "functional-606000" primary control-plane node in "functional-606000" cluster
I1205 10:51:17.617707    9426 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 10:51:17.617720    9426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1205 10:51:17.617733    9426 cache.go:56] Caching tarball of preloaded images
I1205 10:51:17.617826    9426 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1205 10:51:17.617830    9426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1205 10:51:17.617895    9426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/functional-606000/config.json ...
I1205 10:51:17.618434    9426 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 10:51:17.618482    9426 start.go:364] duration metric: took 43.084µs to acquireMachinesLock for "functional-606000"
I1205 10:51:17.618489    9426 start.go:96] Skipping create...Using existing machine configuration
I1205 10:51:17.618493    9426 fix.go:54] fixHost starting: 
I1205 10:51:17.618619    9426 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
W1205 10:51:17.618625    9426 fix.go:138] unexpected machine state, will restart: <nil>
I1205 10:51:17.621607    9426 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
I1205 10:51:17.629641    9426 qemu.go:418] Using hvf for hardware acceleration
I1205 10:51:17.629675    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
I1205 10:51:17.631978    9426 main.go:141] libmachine: STDOUT: 
I1205 10:51:17.631993    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 10:51:17.632025    9426 fix.go:56] duration metric: took 13.532708ms for fixHost
I1205 10:51:17.632028    9426 start.go:83] releasing machines lock for "functional-606000", held for 13.54375ms
W1205 10:51:17.632033    9426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 10:51:17.632065    9426 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 10:51:17.632069    9426 start.go:729] Will try again in 5 seconds ...
I1205 10:51:22.633693    9426 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 10:51:22.634096    9426 start.go:364] duration metric: took 340.083µs to acquireMachinesLock for "functional-606000"
I1205 10:51:22.634274    9426 start.go:96] Skipping create...Using existing machine configuration
I1205 10:51:22.634289    9426 fix.go:54] fixHost starting: 
I1205 10:51:22.635006    9426 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
W1205 10:51:22.635027    9426 fix.go:138] unexpected machine state, will restart: <nil>
I1205 10:51:22.644542    9426 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
I1205 10:51:22.648503    9426 qemu.go:418] Using hvf for hardware acceleration
I1205 10:51:22.648655    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
I1205 10:51:22.658341    9426 main.go:141] libmachine: STDOUT: 
I1205 10:51:22.658426    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 10:51:22.658496    9426 fix.go:56] duration metric: took 24.209125ms for fixHost
I1205 10:51:22.658508    9426 start.go:83] releasing machines lock for "functional-606000", held for 24.357583ms
W1205 10:51:22.658677    9426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 10:51:22.667511    9426 out.go:201] 
W1205 10:51:22.671611    9426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 10:51:22.671633    9426 out.go:270] * 
W1205 10:51:22.674251    9426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 10:51:22.680601    9426 out.go:201] 

* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
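
Both logs failures in this report share one assertion: functional_test.go:1228 requires the output of `minikube logs` to mention the word "Linux", which can only appear once a guest VM has actually booted. Because the qemu2 host above is stopped, the command prints only the host-side Audit and Last Start logs and exits with status 83, so the word is never found. The Go sketch below shows the shape of that containment check; it is an illustration, not minikube's actual test code, and the helper name validateLogsContain is hypothetical (only the binary path, profile flag, and expected word are taken from the output above).

package functional_test

import (
	"os/exec"
	"strings"
	"testing"
)

// validateLogsContain is a hypothetical, minimal version of the check that
// produced the failures above: run `minikube logs` for a profile and require
// that the combined output mentions "Linux".
func validateLogsContain(t *testing.T, profile string) {
	// CombinedOutput captures stdout and stderr together, mirroring the text
	// the failure message quotes between the *** markers.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "logs").CombinedOutput()
	if err != nil {
		t.Logf("minikube logs returned an error: %v", err) // exit status 83 in the run above
	}
	if !strings.Contains(string(out), "Linux") {
		t.Errorf("expected minikube logs to include word: -%q- but got \n***%s***", "Linux", string(out))
	}
}

A check like this fails in well under a second (0.08s here) because no VM start is attempted; it only inspects whatever `minikube logs` can print from the host.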

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2960125267/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the Audit table shown above for TestFunctional/serial/LogsCmd)

==> Last Start <==
Log file created at: 2024/12/05 10:51:17
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 10:51:17.527150    9426 out.go:345] Setting OutFile to fd 1 ...
I1205 10:51:17.527331    9426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:51:17.527333    9426 out.go:358] Setting ErrFile to fd 2...
I1205 10:51:17.527335    9426 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:51:17.527449    9426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:51:17.528475    9426 out.go:352] Setting JSON to false
I1205 10:51:17.546243    9426 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4849,"bootTime":1733419828,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1205 10:51:17.546313    9426 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1205 10:51:17.552746    9426 out.go:177] * [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1205 10:51:17.562682    9426 out.go:177]   - MINIKUBE_LOCATION=20052
I1205 10:51:17.562715    9426 notify.go:220] Checking for updates...
I1205 10:51:17.571632    9426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
I1205 10:51:17.575669    9426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1205 10:51:17.578618    9426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 10:51:17.581635    9426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
I1205 10:51:17.584641    9426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1205 10:51:17.587855    9426 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:51:17.587917    9426 driver.go:394] Setting default libvirt URI to qemu:///system
I1205 10:51:17.592674    9426 out.go:177] * Using the qemu2 driver based on existing profile
I1205 10:51:17.599564    9426 start.go:297] selected driver: qemu2
I1205 10:51:17.599568    9426 start.go:901] validating driver "qemu2" against &{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 10:51:17.599613    9426 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 10:51:17.602246    9426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 10:51:17.602271    9426 cni.go:84] Creating CNI manager for ""
I1205 10:51:17.602296    9426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1205 10:51:17.602364    9426 start.go:340] cluster config:
{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 10:51:17.606968    9426 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 10:51:17.613655    9426 out.go:177] * Starting "functional-606000" primary control-plane node in "functional-606000" cluster
I1205 10:51:17.617707    9426 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 10:51:17.617720    9426 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1205 10:51:17.617733    9426 cache.go:56] Caching tarball of preloaded images
I1205 10:51:17.617826    9426 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1205 10:51:17.617830    9426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1205 10:51:17.617895    9426 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/functional-606000/config.json ...
I1205 10:51:17.618434    9426 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 10:51:17.618482    9426 start.go:364] duration metric: took 43.084µs to acquireMachinesLock for "functional-606000"
I1205 10:51:17.618489    9426 start.go:96] Skipping create...Using existing machine configuration
I1205 10:51:17.618493    9426 fix.go:54] fixHost starting: 
I1205 10:51:17.618619    9426 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
W1205 10:51:17.618625    9426 fix.go:138] unexpected machine state, will restart: <nil>
I1205 10:51:17.621607    9426 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
I1205 10:51:17.629641    9426 qemu.go:418] Using hvf for hardware acceleration
I1205 10:51:17.629675    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
I1205 10:51:17.631978    9426 main.go:141] libmachine: STDOUT: 
I1205 10:51:17.631993    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 10:51:17.632025    9426 fix.go:56] duration metric: took 13.532708ms for fixHost
I1205 10:51:17.632028    9426 start.go:83] releasing machines lock for "functional-606000", held for 13.54375ms
W1205 10:51:17.632033    9426 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 10:51:17.632065    9426 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 10:51:17.632069    9426 start.go:729] Will try again in 5 seconds ...
I1205 10:51:22.633693    9426 start.go:360] acquireMachinesLock for functional-606000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1205 10:51:22.634096    9426 start.go:364] duration metric: took 340.083µs to acquireMachinesLock for "functional-606000"
I1205 10:51:22.634274    9426 start.go:96] Skipping create...Using existing machine configuration
I1205 10:51:22.634289    9426 fix.go:54] fixHost starting: 
I1205 10:51:22.635006    9426 fix.go:112] recreateIfNeeded on functional-606000: state=Stopped err=<nil>
W1205 10:51:22.635027    9426 fix.go:138] unexpected machine state, will restart: <nil>
I1205 10:51:22.644542    9426 out.go:177] * Restarting existing qemu2 VM for "functional-606000" ...
I1205 10:51:22.648503    9426 qemu.go:418] Using hvf for hardware acceleration
I1205 10:51:22.648655    9426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:da:cf:8f:cc:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/functional-606000/disk.qcow2
I1205 10:51:22.658341    9426 main.go:141] libmachine: STDOUT: 
I1205 10:51:22.658426    9426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1205 10:51:22.658496    9426 fix.go:56] duration metric: took 24.209125ms for fixHost
I1205 10:51:22.658508    9426 start.go:83] releasing machines lock for "functional-606000", held for 24.357583ms
W1205 10:51:22.658677    9426 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-606000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1205 10:51:22.667511    9426 out.go:201] 
W1205 10:51:22.671611    9426 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1205 10:51:22.671633    9426 out.go:270] * 
W1205 10:51:22.674251    9426 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 10:51:22.680601    9426 out.go:201] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
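
Every start attempt in the "Last Start" log above dies at the same step: QEMU is launched through socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and every test below sees a stopped host. A minimal Go sketch of that precondition check (hypothetical, not part of the test suite; only the socket path is taken from the log):

    // socketcheck.go: fail fast if the socket_vmnet daemon is not
    // accepting connections on its unix socket.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Same condition the log reports: "Connection refused".
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is up")
    }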

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-606000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-606000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.448917ms)

** stderr ** 
	error: context "functional-606000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-606000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
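
This failure happens before any service logic runs: kubectl is invoked with --context functional-606000, and because the cluster never started, that context is absent from the kubeconfig, so kubectl exits 1. A hedged Go sketch of the same precondition, assuming only that kubectl is on PATH:

    // contextcheck.go: "kubectl config get-contexts <name>" exits
    // non-zero when the named context is missing from the kubeconfig.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func contextExists(name string) bool {
        return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
    }

    func main() {
        fmt.Println(contextExists("functional-606000")) // false in this run
    }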

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-606000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-606000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-606000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-606000 --alsologtostderr -v=1] stderr:
I1205 10:52:03.903018    9739 out.go:345] Setting OutFile to fd 1 ...
I1205 10:52:03.903453    9739 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:03.903456    9739 out.go:358] Setting ErrFile to fd 2...
I1205 10:52:03.903458    9739 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:03.903595    9739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:52:03.903825    9739 mustload.go:65] Loading cluster: functional-606000
I1205 10:52:03.904052    9739 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:03.907981    9739 out.go:177] * The control-plane node functional-606000 host is not running: state=Stopped
I1205 10:52:03.912024    9739 out.go:177]   To start a cluster, run: "minikube start -p functional-606000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (47.024291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 status: exit status 7 (34.161125ms)

-- stdout --
	functional-606000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-606000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.573625ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-606000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 status -o json: exit status 7 (34.952916ms)

-- stdout --
	{"Name":"functional-606000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-606000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (34.784833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
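
StatusCmd exercises three output modes of the status command: the default table, a Go text/template format string passed via -f (the "kublet" label above is a literal in that format string, typo and all, not a minikube field), and JSON via -o json. A sketch of decoding the JSON payload shown above, using only the standard library (the struct fields mirror the keys in the log):

    // statusjson.go: decode the status JSON from the log into a struct.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"functional-606000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`
        var s Status
        if err := json.Unmarshal([]byte(raw), &s); err != nil {
            panic(err)
        }
        fmt.Printf("%s: host=%s apiserver=%s\n", s.Name, s.Host, s.APIServer)
    }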

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-606000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-606000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.703375ms)

** stderr ** 
	error: context "functional-606000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-606000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-606000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-606000 describe po hello-node-connect: exit status 1 (26.384ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:1604: "kubectl --context functional-606000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-606000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-606000 logs -l app=hello-node-connect: exit status 1 (26.523375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:1610: "kubectl --context functional-606000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-606000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-606000 describe svc hello-node-connect: exit status 1 (26.228083ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:1616: "kubectl --context functional-606000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (33.100834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-606000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (35.079709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "echo hello": exit status 83 (48.265584ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n"*. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "cat /etc/hostname": exit status 83 (47.812083ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-606000"- but got *"* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n"*. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (35.297375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)
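
Every ssh invocation here exits with status 83 and prints the stopped-host advisory on stdout instead of the command's output; the harness tells these cases apart by exit code rather than by parsing stdout. A hedged sketch of how a caller recovers that code with os/exec (the binary path is taken from the log):

    // exitcode.go: run a command and extract its exit code.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-606000", "ssh", "echo hello")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            fmt.Printf("exit status %d, output:\n%s", ee.ExitCode(), out)
            return
        }
        fmt.Printf("ok: %s", out)
    }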

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (61.312458ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-606000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.079167ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-606000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-606000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cp functional-606000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd715957612/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 cp functional-606000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd715957612/001/cp-test.txt: exit status 83 (47.315917ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-606000 cp functional-606000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd715957612/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.837708ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd715957612/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.100958ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-606000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (48.563334ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-606000 ssh -n functional-606000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-606000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-606000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
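
The "content mismatch (-want +got)" blocks above are cmp-style diffs: want is the fixture content ("Test file for checking file cp process"), got is the stopped-host advisory that the cp and ssh commands printed instead. A sketch that reproduces the same diff style, assuming the github.com/google/go-cmp module:

    // cmpdiff.go: produce a "-want +got" diff like the ones above.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-606000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-606000\"\n"
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("content mismatch (-want +got):\n%s", diff)
        }
    }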

TestFunctional/parallel/FileSync (0.09s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/9136/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/test/nested/copy/9136/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/test/nested/copy/9136/hosts": exit status 83 (51.698042ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/test/nested/copy/9136/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-606000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-606000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (34.996167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.09s)

TestFunctional/parallel/CertSync (0.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/9136.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/9136.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/9136.pem": exit status 83 (45.301667ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/9136.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo cat /etc/ssl/certs/9136.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/9136.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-606000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-606000"
  	"""
  )
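
CertSync expects the synced files to be byte-identical to the source PEMs; with the guest down, every read returns the advisory text instead. A lighter, hedged check that at least confirms a file parses as a PEM certificate block, using only the standard library (the path comes from the check above):

    // pemcheck.go: verify a file contains a CERTIFICATE PEM block.
    package main

    import (
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/etc/ssl/certs/9136.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            fmt.Fprintln(os.Stderr, "no CERTIFICATE block found")
            os.Exit(1)
        }
        fmt.Println("certificate found:", len(block.Bytes), "DER bytes")
    }
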
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/9136.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /usr/share/ca-certificates/9136.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /usr/share/ca-certificates/9136.pem": exit status 83 (42.609542ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/9136.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo cat /usr/share/ca-certificates/9136.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/9136.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-606000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-606000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (49.5545ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-606000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-606000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/91362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/91362.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/91362.pem": exit status 83 (43.591292ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/91362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo cat /etc/ssl/certs/91362.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/91362.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-606000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-606000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/91362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /usr/share/ca-certificates/91362.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /usr/share/ca-certificates/91362.pem": exit status 83 (44.66325ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/91362.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo cat /usr/share/ca-certificates/91362.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/91362.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-606000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-606000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (46.598459ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-606000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-606000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (34.742083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.31s)
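Every assertion in this CertSync block fails the same way: the guest is stopped, so each ssh call exits 83 and its advisory text is what ends up diffed against the expected certificate. As a sketch, the manual equivalent against a started profile, with the local path to minikube_test2.pem assumed rather than taken from this log:

    # bring the profile up, then compare the synced cert byte-for-byte
    out/minikube-darwin-arm64 start -p functional-606000
    out/minikube-darwin-arm64 -p functional-606000 ssh "sudo cat /etc/ssl/certs/91362.pem" \
      | diff ./testdata/minikube_test2.pem -   # pem path is an assumption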

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-606000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-606000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.762458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-606000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-606000 -n functional-606000: exit status 7 (35.580833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-606000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
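All five missing-label complaints above have a single root cause: no kubeconfig context named "functional-606000" exists because the cluster never came up. For reference, the exact label query the test issues, runnable by hand once a context exists:

    # print the label keys of the first node via a go-template
    kubectl --context functional-606000 get nodes \
      --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'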

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo systemctl is-active crio": exit status 83 (44.434833ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
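The assertion itself is narrow: on a docker-runtime profile the crio unit must not be active. A manual probe against a running guest would be:

    # should print "inactive" (and exit non-zero) when docker is the runtime
    out/minikube-darwin-arm64 -p functional-606000 ssh "sudo systemctl is-active crio"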

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 version -o=json --components: exit status 83 (44.965125ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
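Each "expected to see ..." line above is a substring check against the component listing that version --components emits from a running host. A minimal manual probe for one of them:

    # each component name (buildctl, containerd, docker, ...) should appear in the output
    out/minikube-darwin-arm64 -p functional-606000 version -o=json --components | grep buildctl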

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-606000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-606000 image ls --format short --alsologtostderr:
I1205 10:52:04.342013    9754 out.go:345] Setting OutFile to fd 1 ...
I1205 10:52:04.342218    9754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.342221    9754 out.go:358] Setting ErrFile to fd 2...
I1205 10:52:04.342224    9754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.342358    9754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:52:04.342773    9754 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:04.342840    9754 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
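The empty stdout here, and in the table/json/yaml variants that follow, means the image listing came back with zero entries rather than erroring. On a live cluster the short format prints one image reference per line, which is what the pause-image check relies on:

    # a running cluster should list the pause image among others
    out/minikube-darwin-arm64 -p functional-606000 image ls --format short | grep registry.k8s.io/pause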

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-606000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-606000 image ls --format table --alsologtostderr:
I1205 10:52:04.588884    9766 out.go:345] Setting OutFile to fd 1 ...
I1205 10:52:04.589090    9766 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.589093    9766 out.go:358] Setting ErrFile to fd 2...
I1205 10:52:04.589096    9766 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.589224    9766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:52:04.589998    9766 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:04.590093    9766 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1205 10:52:23.912660    9136 retry.go:31] will retry after 23.857014084s: Temporary Error: Get "http:": http: no Host in request URL
I1205 10:52:47.771256    9136 retry.go:31] will retry after 27.536301539s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-606000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-606000 image ls --format json --alsologtostderr:
I1205 10:52:04.550381    9764 out.go:345] Setting OutFile to fd 1 ...
I1205 10:52:04.550583    9764 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.550587    9764 out.go:358] Setting ErrFile to fd 2...
I1205 10:52:04.550589    9764 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.550701    9764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:52:04.551140    9764 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:04.551203    9764 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-606000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-606000 image ls --format yaml --alsologtostderr:
I1205 10:52:04.382515    9756 out.go:345] Setting OutFile to fd 1 ...
I1205 10:52:04.382725    9756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.382728    9756 out.go:358] Setting ErrFile to fd 2...
I1205 10:52:04.382731    9756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.382873    9756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:52:04.383373    9756 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:04.383435    9756 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh pgrep buildkitd: exit status 83 (45.924291ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image build -t localhost/my-image:functional-606000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-606000 image build -t localhost/my-image:functional-606000 testdata/build --alsologtostderr:
I1205 10:52:04.469583    9760 out.go:345] Setting OutFile to fd 1 ...
I1205 10:52:04.470149    9760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.470152    9760 out.go:358] Setting ErrFile to fd 2...
I1205 10:52:04.470155    9760 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:52:04.470333    9760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:52:04.470741    9760 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:04.471180    9760 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:52:04.471412    9760 build_images.go:133] succeeded building to: 
I1205 10:52:04.471415    9760 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls
functional_test.go:446: expected "localhost/my-image:functional-606000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
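Note the tell-tale pair of build_images.go lines: the build "succeeded" to an empty node list and "failed" to an empty node list, so the follow-up image ls has nothing to find. The intended sequence, assuming a started profile and the repo's testdata/build context:

    # build inside the minikube runtime, then confirm the tag landed
    out/minikube-darwin-arm64 -p functional-606000 image build -t localhost/my-image:functional-606000 testdata/build
    out/minikube-darwin-arm64 -p functional-606000 image ls | grep localhost/my-image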

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-606000 docker-env) && out/minikube-darwin-arm64 status -p functional-606000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-606000 docker-env) && out/minikube-darwin-arm64 status -p functional-606000": exit status 1 (47.633416ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
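docker-env can only emit usable exports when the daemon inside the guest is reachable, so the eval-then-status pipeline fails as a unit. The usual interactive form, with a docker CLI assumed present on the host:

    # point the host docker client at the minikube daemon, then verify
    eval "$(out/minikube-darwin-arm64 -p functional-606000 docker-env)"
    docker images   # should list images from the minikube runtime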

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2: exit status 83 (47.71175ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
** stderr ** 
	I1205 10:52:04.198545    9748 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:52:04.199530    9748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:04.199533    9748 out.go:358] Setting ErrFile to fd 2...
	I1205 10:52:04.199535    9748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:04.199705    9748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:52:04.199925    9748 mustload.go:65] Loading cluster: functional-606000
	I1205 10:52:04.200116    9748 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:52:04.204417    9748 out.go:177] * The control-plane node functional-606000 host is not running: state=Stopped
	I1205 10:52:04.208415    9748 out.go:177]   To start a cluster, run: "minikube start -p functional-606000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
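This and the two update-context variants below fail identically: minikube short-circuits with the stopped-host advisory instead of touching the kubeconfig. Against a running profile the command is idempotent, which is exactly what the want=*"No changes"* pattern encodes; as a sketch:

    # a repeat run against an unchanged cluster should report "No changes"
    out/minikube-darwin-arm64 -p functional-606000 update-context
    out/minikube-darwin-arm64 -p functional-606000 update-context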

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2: exit status 83 (47.372167ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
** stderr ** 
	I1205 10:52:04.293856    9752 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:52:04.294027    9752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:04.294030    9752 out.go:358] Setting ErrFile to fd 2...
	I1205 10:52:04.294032    9752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:04.294175    9752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:52:04.294433    9752 mustload.go:65] Loading cluster: functional-606000
	I1205 10:52:04.294638    9752 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:52:04.299482    9752 out.go:177] * The control-plane node functional-606000 host is not running: state=Stopped
	I1205 10:52:04.303447    9752 out.go:177]   To start a cluster, run: "minikube start -p functional-606000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2: exit status 83 (46.612042ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
** stderr ** 
	I1205 10:52:04.245617    9750 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:52:04.245778    9750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:04.245782    9750 out.go:358] Setting ErrFile to fd 2...
	I1205 10:52:04.245784    9750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:04.245899    9750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:52:04.246121    9750 mustload.go:65] Loading cluster: functional-606000
	I1205 10:52:04.246332    9750 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:52:04.251308    9750 out.go:177] * The control-plane node functional-606000 host is not running: state=Stopped
	I1205 10:52:04.255371    9750 out.go:177]   To start a cluster, run: "minikube start -p functional-606000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-606000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-606000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-606000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.904917ms)

** stderr ** 
	error: context "functional-606000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-606000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
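Because this setup step never creates the deployment, every ServiceCmd subtest after it (List, JSONOutput, HTTPS, Format, URL) is querying a service that cannot exist. The setup, sketched with an expose step whose flags are assumed rather than taken from this log:

    # create the backend, then expose it so "service list" can find it
    kubectl --context functional-606000 create deployment hello-node \
      --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-606000 expose deployment hello-node --type=NodePort --port=8080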

TestFunctional/parallel/ServiceCmd/List (0.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 service list: exit status 83 (57.426792ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-606000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.06s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 service list -o json: exit status 83 (44.7175ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-606000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 service --namespace=default --https --url hello-node: exit status 83 (51.849417ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-606000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 service hello-node --url --format={{.IP}}: exit status 83 (46.692125ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-606000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 service hello-node --url: exit status 83 (45.692875ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-606000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test.go:1569: failed to parse "* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"": parse "* The control-plane node functional-606000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-606000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1205 10:51:24.660676    9543 out.go:345] Setting OutFile to fd 1 ...
I1205 10:51:24.660920    9543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:51:24.660923    9543 out.go:358] Setting ErrFile to fd 2...
I1205 10:51:24.660926    9543 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:51:24.661060    9543 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:51:24.661271    9543 mustload.go:65] Loading cluster: functional-606000
I1205 10:51:24.661500    9543 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:51:24.665883    9543 out.go:177] * The control-plane node functional-606000 host is not running: state=Stopped
I1205 10:51:24.673868    9543 out.go:177]   To start a cluster, run: "minikube start -p functional-606000"

stdout: * The control-plane node functional-606000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-606000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 9544: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
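The test launches the same tunnel twice and inspects how the second instance behaves; here both die instantly with the stopped-host advisory, and the teardown then trips over already-closed pipes ("file already closed"). The shape of the scenario, as a sketch only:

    # run two tunnels against one profile; the test examines the second one's behavior
    out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr &
    out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr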

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-606000": client config: context "functional-606000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1205 10:51:24.739196    9136 retry.go:31] will retry after 2.2994824s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-606000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-606000 get svc nginx-svc: exit status 1 (69.699959ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-606000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-606000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.66s)
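The malformed "http:" URL is the tell: nginx-svc never received a LoadBalancer ingress IP, so the test built a URL from an empty host and retried it for the full 110 s. With a cluster and "minikube tunnel" running, the address can be read straight off the service; the jsonpath expression below is a sketch, not taken from the test:

    # with a tunnel up, the service gets an external IP to curl
    IP=$(kubectl --context functional-606000 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP" | grep 'Welcome to nginx!'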

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image load --daemon kicbase/echo-server:functional-606000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-606000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image load --daemon kicbase/echo-server:functional-606000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-606000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-606000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image load --daemon kicbase/echo-server:functional-606000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls
I1205 10:51:27.041153    9136 retry.go:31] will retry after 3.63156214s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:446: expected "kicbase/echo-server:functional-606000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image save kicbase/echo-server:functional-606000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
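Save and load are inverses, so the ImageLoadFromFile failure a few entries down follows mechanically from this one: the tarball was never written, leaving nothing to load. The round trip, with a scratch path chosen for the sketch:

    # save an image to a host-side tarball, then load it back
    out/minikube-darwin-arm64 -p functional-606000 image save kicbase/echo-server:functional-606000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-606000 image load /tmp/echo-server-save.tar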

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-606000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1205 10:53:15.395658    9136 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.033885708s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
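
The dig query gets no answer from 10.96.0.10, the in-cluster DNS service IP that the tunnel is supposed to make routable from the host; resolver #8 above shows cluster.local scoped to that server. A minimal Go sketch of the same lookup, assuming a working tunnel (the server address and service name come from the log; the rest is illustrative, not minikube's test code):

	// lookup.go: resolve a cluster-local name through the cluster DNS
	// service directly, mirroring the dig check above (hypothetical sketch).
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				// 10.96.0.10:53 is the cluster DNS endpoint queried by dig above.
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		ips, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err) // the run above timed out here
			return
		}
		fmt.Println("resolved:", ips)
	}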

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1205 10:53:40.526701    9136 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:53:50.527821    9136 retry.go:31] will retry after 3.045884087s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1205 10:54:03.577885    9136 retry.go:31] will retry after 2.827400288s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:60250->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
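
The retry.go lines show the client polling the forwarded name with a per-request timeout until the test's budget expires; the final error is the same unreachable 10.96.0.10 as in DNSResolutionByDig. A Go sketch of that poll loop (the 30s budget and 3s backoff are assumptions, not minikube's exact constants):

	// poll.go: fetch the DNS-forwarded service URL with retries, mirroring
	// the retry.go behaviour above (hypothetical sketch).
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Println(string(body)) // should contain "Welcome to nginx!"
				return
			}
			fmt.Println("will retry after 3s:", err)
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up before the deadline")
	}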

TestMultiControlPlane/serial/StartCluster (10.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-144000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-144000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.997496167s)

-- stdout --
	* [ha-144000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-144000" primary control-plane node in "ha-144000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-144000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 10:54:10.941732    9819 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:54:10.941901    9819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:54:10.941904    9819 out.go:358] Setting ErrFile to fd 2...
	I1205 10:54:10.941907    9819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:54:10.942037    9819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:54:10.943235    9819 out.go:352] Setting JSON to false
	I1205 10:54:10.960976    9819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5022,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:54:10.961050    9819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:54:10.968187    9819 out.go:177] * [ha-144000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:54:10.975161    9819 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:54:10.975210    9819 notify.go:220] Checking for updates...
	I1205 10:54:10.981164    9819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:54:10.984142    9819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:54:10.988102    9819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:54:10.991159    9819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:54:10.994148    9819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:54:10.997249    9819 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:54:11.001153    9819 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 10:54:11.008120    9819 start.go:297] selected driver: qemu2
	I1205 10:54:11.008142    9819 start.go:901] validating driver "qemu2" against <nil>
	I1205 10:54:11.008154    9819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:54:11.010806    9819 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 10:54:11.015106    9819 out.go:177] * Automatically selected the socket_vmnet network
	I1205 10:54:11.018194    9819 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 10:54:11.018220    9819 cni.go:84] Creating CNI manager for ""
	I1205 10:54:11.018241    9819 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 10:54:11.018246    9819 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 10:54:11.018305    9819 start.go:340] cluster config:
	{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:54:11.022921    9819 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:54:11.030173    9819 out.go:177] * Starting "ha-144000" primary control-plane node in "ha-144000" cluster
	I1205 10:54:11.034147    9819 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:54:11.034168    9819 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:54:11.034180    9819 cache.go:56] Caching tarball of preloaded images
	I1205 10:54:11.034273    9819 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:54:11.034279    9819 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:54:11.034501    9819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/ha-144000/config.json ...
	I1205 10:54:11.034513    9819 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/ha-144000/config.json: {Name:mk832b08406a61ec6f1d61fef3f2217a6bfa6ee7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 10:54:11.035027    9819 start.go:360] acquireMachinesLock for ha-144000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:54:11.035078    9819 start.go:364] duration metric: took 45.291µs to acquireMachinesLock for "ha-144000"
	I1205 10:54:11.035090    9819 start.go:93] Provisioning new machine with config: &{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 10:54:11.035124    9819 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 10:54:11.043111    9819 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 10:54:11.060801    9819 start.go:159] libmachine.API.Create for "ha-144000" (driver="qemu2")
	I1205 10:54:11.060833    9819 client.go:168] LocalClient.Create starting
	I1205 10:54:11.060907    9819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 10:54:11.060948    9819 main.go:141] libmachine: Decoding PEM data...
	I1205 10:54:11.060963    9819 main.go:141] libmachine: Parsing certificate...
	I1205 10:54:11.061002    9819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 10:54:11.061033    9819 main.go:141] libmachine: Decoding PEM data...
	I1205 10:54:11.061042    9819 main.go:141] libmachine: Parsing certificate...
	I1205 10:54:11.061512    9819 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 10:54:11.224081    9819 main.go:141] libmachine: Creating SSH key...
	I1205 10:54:11.468319    9819 main.go:141] libmachine: Creating Disk image...
	I1205 10:54:11.468327    9819 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 10:54:11.468609    9819 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:54:11.479227    9819 main.go:141] libmachine: STDOUT: 
	I1205 10:54:11.479246    9819 main.go:141] libmachine: STDERR: 
	I1205 10:54:11.479309    9819 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2 +20000M
	I1205 10:54:11.487872    9819 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 10:54:11.487895    9819 main.go:141] libmachine: STDERR: 
	I1205 10:54:11.487909    9819 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:54:11.487918    9819 main.go:141] libmachine: Starting QEMU VM...
	I1205 10:54:11.487929    9819 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:54:11.487953    9819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:11:bb:13:a6:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:54:11.489800    9819 main.go:141] libmachine: STDOUT: 
	I1205 10:54:11.489813    9819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:54:11.489835    9819 client.go:171] duration metric: took 429.00075ms to LocalClient.Create
	I1205 10:54:13.491990    9819 start.go:128] duration metric: took 2.456873583s to createHost
	I1205 10:54:13.492060    9819 start.go:83] releasing machines lock for "ha-144000", held for 2.456998833s
	W1205 10:54:13.492111    9819 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:54:13.504491    9819 out.go:177] * Deleting "ha-144000" in qemu2 ...
	W1205 10:54:13.533858    9819 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:54:13.533887    9819 start.go:729] Will try again in 5 seconds ...
	I1205 10:54:18.535992    9819 start.go:360] acquireMachinesLock for ha-144000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:54:18.536502    9819 start.go:364] duration metric: took 432.833µs to acquireMachinesLock for "ha-144000"
	I1205 10:54:18.536624    9819 start.go:93] Provisioning new machine with config: &{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 10:54:18.536936    9819 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 10:54:18.541724    9819 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 10:54:18.588743    9819 start.go:159] libmachine.API.Create for "ha-144000" (driver="qemu2")
	I1205 10:54:18.588799    9819 client.go:168] LocalClient.Create starting
	I1205 10:54:18.588927    9819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 10:54:18.589019    9819 main.go:141] libmachine: Decoding PEM data...
	I1205 10:54:18.589041    9819 main.go:141] libmachine: Parsing certificate...
	I1205 10:54:18.589109    9819 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 10:54:18.589165    9819 main.go:141] libmachine: Decoding PEM data...
	I1205 10:54:18.589176    9819 main.go:141] libmachine: Parsing certificate...
	I1205 10:54:18.589942    9819 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 10:54:18.771992    9819 main.go:141] libmachine: Creating SSH key...
	I1205 10:54:18.833945    9819 main.go:141] libmachine: Creating Disk image...
	I1205 10:54:18.833950    9819 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 10:54:18.834177    9819 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:54:18.844216    9819 main.go:141] libmachine: STDOUT: 
	I1205 10:54:18.844251    9819 main.go:141] libmachine: STDERR: 
	I1205 10:54:18.844303    9819 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2 +20000M
	I1205 10:54:18.852740    9819 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 10:54:18.852754    9819 main.go:141] libmachine: STDERR: 
	I1205 10:54:18.852766    9819 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:54:18.852770    9819 main.go:141] libmachine: Starting QEMU VM...
	I1205 10:54:18.852778    9819 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:54:18.852813    9819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:9d:6d:04:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:54:18.854618    9819 main.go:141] libmachine: STDOUT: 
	I1205 10:54:18.854639    9819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:54:18.854651    9819 client.go:171] duration metric: took 265.848292ms to LocalClient.Create
	I1205 10:54:20.856805    9819 start.go:128] duration metric: took 2.319863958s to createHost
	I1205 10:54:20.856935    9819 start.go:83] releasing machines lock for "ha-144000", held for 2.320374291s
	W1205 10:54:20.857263    9819 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-144000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:54:20.872856    9819 out.go:201] 
	W1205 10:54:20.875964    9819 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:54:20.875994    9819 out.go:270] * 
	* 
	W1205 10:54:20.878548    9819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:54:20.893948    9819 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-144000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (73.153584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.07s)
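
The root cause here, and in the qemu2 failures that follow, is the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused`: no socket_vmnet daemon is listening when socket_vmnet_client tries to hand QEMU its network file descriptor. The condition can be reproduced without booting a VM by dialing the socket directly (path taken from the log; hypothetical sketch, not minikube's code):

	// checksock.go: dial the socket_vmnet unix socket to see whether the
	// daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no daemon listening, this fails the same way libmachine
			// did above: connection refused.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}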

TestMultiControlPlane/serial/DeployApp (114.91s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.728667ms)

** stderr ** 
	error: cluster "ha-144000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- rollout status deployment/busybox: exit status 1 (63.015166ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.898167ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:21.171043    9136 retry.go:31] will retry after 1.385363274s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.358416ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:22.667072    9136 retry.go:31] will retry after 1.408296793s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.433666ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:24.186180    9136 retry.go:31] will retry after 3.339383387s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.069792ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:27.636932    9136 retry.go:31] will retry after 2.64251258s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.749416ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:30.391470    9136 retry.go:31] will retry after 4.837391575s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.047792ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:35.338291    9136 retry.go:31] will retry after 5.003364591s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.272041ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:40.452322    9136 retry.go:31] will retry after 10.60875231s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.062042ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:54:51.172564    9136 retry.go:31] will retry after 24.057238109s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.501209ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:55:15.337723    9136 retry.go:31] will retry after 24.710336547s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.411042ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:55:40.159622    9136 retry.go:31] will retry after 35.330849723s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.568792ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.230792ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.768459ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.872042ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.925459ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.619042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (114.91s)
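
The retry intervals logged above (≈1.4s, 1.4s, 3.3s, 2.6s, 4.8s, 5.0s, 10.6s, 24.1s, 24.7s, 35.3s) grow roughly like a jittered doubling backoff; since StartCluster never created the cluster, every attempt hits the same `no server found` error until the budget is spent. A Go sketch of a schedule with that shape (the doubling base and 35s cap are assumptions, not minikube's actual retry.go parameters):

	// backoff.go: print a jittered, capped, doubling retry schedule similar
	// in shape to the intervals above (hypothetical sketch).
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func main() {
		wait := time.Second
		for attempt := 1; attempt <= 10; attempt++ {
			// Each attempt in the log failed the same way; the next delay is
			// the base wait plus a random jitter, with the base doubling up
			// to a cap.
			jitter := time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait+jitter)
			wait *= 2
			if wait > 35*time.Second {
				wait = 35 * time.Second
			}
		}
	}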

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-144000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.685333ms)

** stderr ** 
	error: no server found for cluster "ha-144000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (35.0365ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-144000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-144000 -v=7 --alsologtostderr: exit status 83 (49.006708ms)

-- stdout --
	* The control-plane node ha-144000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-144000"

-- /stdout --
** stderr ** 
	I1205 10:56:16.019227    9905 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:16.019586    9905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.019590    9905 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:16.019592    9905 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.019800    9905 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:16.020037    9905 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:16.020257    9905 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:16.025228    9905 out.go:177] * The control-plane node ha-144000 host is not running: state=Stopped
	I1205 10:56:16.030397    9905 out.go:177]   To start a cluster, run: "minikube start -p ha-144000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-144000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (35.097833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-144000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-144000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.943666ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-144000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-144000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-144000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.608125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
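
The second error, `unexpected end of JSON input`, is what encoding/json returns when handed zero bytes, which is all kubectl printed once the context lookup failed. A minimal reproduction in Go:

	// emptyjson.go: decoding an empty payload reproduces the
	// "unexpected end of JSON input" error above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}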

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-144000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-144000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.593292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
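
The assertions decode the `profile list --output json` payload and require status "HAppy" with 4 nodes, while the payload above carries a single node in "Starting". A trimmed Go sketch of that decode-and-count step (field names come from the JSON above; the struct is cut down to what the check needs and is not the test's real type):

	// profiles.go: decode the profile list payload and count nodes,
	// mirroring the ha_test.go:305/309 checks (hypothetical sketch).
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated stand-in for the payload shown in the log.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-144000","Status":"Starting","Config":{"Nodes":[{}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		p := pl.Valid[0]
		fmt.Printf("%s: status=%s nodes=%d (test wants HAppy/4)\n",
			p.Name, p.Status, len(p.Config.Nodes))
	}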

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status --output json -v=7 --alsologtostderr: exit status 7 (34.8305ms)

-- stdout --
	{"Name":"ha-144000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1205 10:56:16.253214    9917 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:16.253404    9917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.253410    9917 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:16.253413    9917 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.253552    9917 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:16.253686    9917 out.go:352] Setting JSON to true
	I1205 10:56:16.253697    9917 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:16.253757    9917 notify.go:220] Checking for updates...
	I1205 10:56:16.253921    9917 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:16.253929    9917 status.go:174] checking status of ha-144000 ...
	I1205 10:56:16.254174    9917 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:16.254178    9917 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:16.254180    9917 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-144000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
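The decode failure above is a shape mismatch rather than corrupt output: with the cluster reduced to a single node, "status --output json" prints one JSON object, while ha_test.go decodes into a slice of cluster.Status. A minimal sketch of the mismatch in Go, using a trimmed stand-in for cluster.Status (field names taken from the stdout above; this is not minikube's actual type definition):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-in for minikube's cluster.Status; only the fields
// visible in the stdout above are included.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// One node left in the profile, so status prints a single object,
	// not an array.
	raw := []byte(`{"Name":"ha-144000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var statuses []Status
	if err := json.Unmarshal(raw, &statuses); err != nil {
		// Prints: json: cannot unmarshal object into Go value of type []main.Status
		fmt.Println(err)
	}
}

Running this reproduces the same "cannot unmarshal object into Go value" error, modulo the package path on the type name.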
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (35.168208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.18325ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 10:56:16.324704    9921 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:16.325157    9921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.325161    9921 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:16.325163    9921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.325338    9921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:16.325584    9921 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:16.325809    9921 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:16.330265    9921 out.go:201] 
	W1205 10:56:16.333306    9921 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1205 10:56:16.333312    9921 out.go:270] * 
	* 
	W1205 10:56:16.334984    9921 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:56:16.338196    9921 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-144000 node stop m02 -v=7 --alsologtostderr": exit status 85
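Exit status 85 here is a knock-on effect of the earlier StartCluster failure: the ha-144000 profile shown in the profile-list JSON above contains exactly one node, with an empty Name, so there is no "m02" to stop. A hypothetical sketch of the lookup that fails, with the Node struct trimmed to the fields visible in the profile (this mirrors, but is not, minikube's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-in for a node entry in the profile config.
type Node struct {
	Name         string
	ControlPlane bool
	Worker       bool
}

// findNode resolves a node by name, the way "node stop m02" must.
func findNode(nodes []Node, name string) (*Node, error) {
	for i := range nodes {
		if nodes[i].Name == name {
			return &nodes[i], nil
		}
	}
	return nil, fmt.Errorf("retrieving node: Could not find node %s", name)
}

func main() {
	// Nodes array exactly as it appears in the ha-144000 profile above:
	// a single unnamed control-plane node.
	raw := []byte(`[{"Name":"","ControlPlane":true,"Worker":true}]`)
	var nodes []Node
	_ = json.Unmarshal(raw, &nodes)

	if _, err := findNode(nodes, "m02"); err != nil {
		fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
	}
}

The same root cause explains every "Could not find node m02" failure in the remainder of this run.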
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (35.274292ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:16.375728    9923 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:16.375955    9923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.375958    9923 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:16.375960    9923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.376084    9923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:16.376222    9923 out.go:352] Setting JSON to false
	I1205 10:56:16.376232    9923 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:16.376298    9923 notify.go:220] Checking for updates...
	I1205 10:56:16.376439    9923 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:16.376446    9923 status.go:174] checking status of ha-144000 ...
	I1205 10:56:16.376698    9923 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:16.376702    9923 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:16.376704    9923 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (35.19ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-144000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (35.3025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.29s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.327042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 10:56:16.533986    9932 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:16.534421    9932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.534425    9932 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:16.534428    9932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.534619    9932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:16.534836    9932 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:16.535013    9932 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:16.539253    9932 out.go:201] 
	W1205 10:56:16.542257    9932 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1205 10:56:16.542263    9932 out.go:270] * 
	* 
	W1205 10:56:16.543927    9932 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:56:16.547348    9932 out.go:201] 

** /stderr **
ha_test.go:424: I1205 10:56:16.533986    9932 out.go:345] Setting OutFile to fd 1 ...
I1205 10:56:16.534421    9932 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:56:16.534425    9932 out.go:358] Setting ErrFile to fd 2...
I1205 10:56:16.534428    9932 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:56:16.534619    9932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:56:16.534836    9932 mustload.go:65] Loading cluster: ha-144000
I1205 10:56:16.535013    9932 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:56:16.539253    9932 out.go:201] 
W1205 10:56:16.542257    9932 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1205 10:56:16.542263    9932 out.go:270] * 
* 
W1205 10:56:16.543927    9932 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 10:56:16.547348    9932 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-144000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (35.150209ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:16.584552    9934 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:16.584746    9934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.584749    9934 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:16.584752    9934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:16.584881    9934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:16.585023    9934 out.go:352] Setting JSON to false
	I1205 10:56:16.585033    9934 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:16.585076    9934 notify.go:220] Checking for updates...
	I1205 10:56:16.585263    9934 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:16.585270    9934 status.go:174] checking status of ha-144000 ...
	I1205 10:56:16.585518    9934 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:16.585522    9934 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:16.585524    9934 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:16.586478    9136 retry.go:31] will retry after 1.191143144s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (80.848209ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:17.858806    9936 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:17.859013    9936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:17.859017    9936 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:17.859019    9936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:17.859194    9936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:17.859378    9936 out.go:352] Setting JSON to false
	I1205 10:56:17.859393    9936 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:17.859417    9936 notify.go:220] Checking for updates...
	I1205 10:56:17.859645    9936 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:17.859654    9936 status.go:174] checking status of ha-144000 ...
	I1205 10:56:17.859949    9936 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:17.859953    9936 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:17.859956    9936 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:17.860981    9136 retry.go:31] will retry after 1.77237619s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (79.485416ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:19.713137    9938 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:19.713342    9938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:19.713346    9938 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:19.713349    9938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:19.713507    9938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:19.713655    9938 out.go:352] Setting JSON to false
	I1205 10:56:19.713669    9938 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:19.713708    9938 notify.go:220] Checking for updates...
	I1205 10:56:19.713919    9938 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:19.713928    9938 status.go:174] checking status of ha-144000 ...
	I1205 10:56:19.714208    9938 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:19.714212    9938 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:19.714215    9938 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:19.715193    9136 retry.go:31] will retry after 1.845835411s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (78.598833ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:21.639692    9940 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:21.639927    9940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:21.639932    9940 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:21.639934    9940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:21.640125    9940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:21.640283    9940 out.go:352] Setting JSON to false
	I1205 10:56:21.640295    9940 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:21.640334    9940 notify.go:220] Checking for updates...
	I1205 10:56:21.640543    9940 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:21.640552    9940 status.go:174] checking status of ha-144000 ...
	I1205 10:56:21.640841    9940 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:21.640845    9940 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:21.640848    9940 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:21.641976    9136 retry.go:31] will retry after 2.719147478s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (78.414ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:24.439761    9942 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:24.440006    9942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:24.440010    9942 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:24.440014    9942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:24.440190    9942 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:24.440357    9942 out.go:352] Setting JSON to false
	I1205 10:56:24.440368    9942 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:24.440416    9942 notify.go:220] Checking for updates...
	I1205 10:56:24.440614    9942 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:24.440623    9942 status.go:174] checking status of ha-144000 ...
	I1205 10:56:24.440926    9942 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:24.440931    9942 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:24.440933    9942 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:24.441921    9136 retry.go:31] will retry after 4.033679371s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (81.236917ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:28.557003    9946 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:28.557239    9946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:28.557244    9946 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:28.557247    9946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:28.557435    9946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:28.557604    9946 out.go:352] Setting JSON to false
	I1205 10:56:28.557617    9946 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:28.557652    9946 notify.go:220] Checking for updates...
	I1205 10:56:28.557873    9946 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:28.557882    9946 status.go:174] checking status of ha-144000 ...
	I1205 10:56:28.558198    9946 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:28.558202    9946 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:28.558205    9946 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:28.559176    9136 retry.go:31] will retry after 7.83996711s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (77.750292ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:36.477081    9948 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:36.477296    9948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:36.477300    9948 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:36.477303    9948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:36.477443    9948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:36.477584    9948 out.go:352] Setting JSON to false
	I1205 10:56:36.477599    9948 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:36.477631    9948 notify.go:220] Checking for updates...
	I1205 10:56:36.477844    9948 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:36.477853    9948 status.go:174] checking status of ha-144000 ...
	I1205 10:56:36.478154    9948 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:36.478158    9948 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:36.478161    9948 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:36.479196    9136 retry.go:31] will retry after 7.102027669s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (77.2515ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:43.659817    9950 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:43.660022    9950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:43.660026    9950 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:43.660028    9950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:43.660165    9950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:43.660313    9950 out.go:352] Setting JSON to false
	I1205 10:56:43.660326    9950 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:43.660352    9950 notify.go:220] Checking for updates...
	I1205 10:56:43.660589    9950 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:43.660600    9950 status.go:174] checking status of ha-144000 ...
	I1205 10:56:43.660883    9950 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:43.660888    9950 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:43.660890    9950 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:56:43.661953    9136 retry.go:31] will retry after 13.013172426s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (78.656ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:56:56.754035    9952 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:56:56.754217    9952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:56.754221    9952 out.go:358] Setting ErrFile to fd 2...
	I1205 10:56:56.754224    9952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:56:56.754393    9952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:56:56.754541    9952 out.go:352] Setting JSON to false
	I1205 10:56:56.754552    9952 mustload.go:65] Loading cluster: ha-144000
	I1205 10:56:56.754586    9952 notify.go:220] Checking for updates...
	I1205 10:56:56.754794    9952 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:56:56.754803    9952 status.go:174] checking status of ha-144000 ...
	I1205 10:56:56.755093    9952 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:56:56.755098    9952 status.go:384] host is not running, skipping remaining checks
	I1205 10:56:56.755100    9952 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr" : exit status 7
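The interleaved retry.go lines above show the test helper polling status with growing, jittered delays (1.19s, 1.77s, 1.85s, 2.72s, 4.03s, 7.84s, 7.10s, 13.01s) before giving up after roughly 40 seconds. A minimal sketch of that jittered exponential-backoff pattern; the function name, attempt count, and constants here are assumptions, not minikube's API:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn up to attempts times, sleeping between failures with
// a delay that doubles each round plus up to 50% random jitter, which
// matches the roughly-exponential-but-uneven intervals in the log.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, time.Second, func() error {
		calls++
		return fmt.Errorf("exit status 7 (attempt %d)", calls)
	})
}

Since the host never leaves the Stopped state, every poll returns exit status 7 and the backoff only delays the inevitable failure.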
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (36.958042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (40.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-144000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-144000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.561ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.08s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-144000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-144000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-144000 -v=7 --alsologtostderr: (3.695017208s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-144000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-144000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.233388834s)

-- stdout --
	* [ha-144000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-144000" primary control-plane node in "ha-144000" cluster
	* Restarting existing qemu2 VM for "ha-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
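The restart in the stdout above fails before Kubernetes is ever involved: QEMU cannot reach the socket_vmnet daemon at the SocketVMnetPath configured in the profile (/var/run/socket_vmnet), so the VM never gets its network attachment and start exits with status 80. A quick standalone probe of the same unix socket, offered as a sketch rather than minikube's actual pre-flight check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket the qemu2 driver is configured to use
	// (SocketVMnetPath in the profile above). "connection refused"
	// here corresponds to the VM start failure in the log: the socket
	// path exists but no socket_vmnet daemon is accepting on it.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails with "connection refused", the daemon is not running (or not listening on that path) on the build host; checking or restarting socket_vmnet there would be the first thing to try.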
** stderr ** 
	I1205 10:57:00.679034    9985 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:57:00.679218    9985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:00.679222    9985 out.go:358] Setting ErrFile to fd 2...
	I1205 10:57:00.679226    9985 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:00.679396    9985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:57:00.680592    9985 out.go:352] Setting JSON to false
	I1205 10:57:00.700651    9985 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5192,"bootTime":1733419828,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:57:00.700721    9985 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:57:00.705616    9985 out.go:177] * [ha-144000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:57:00.711592    9985 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:57:00.711634    9985 notify.go:220] Checking for updates...
	I1205 10:57:00.719544    9985 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:57:00.723404    9985 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:57:00.726541    9985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:57:00.729532    9985 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:57:00.732639    9985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:57:00.735851    9985 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:57:00.735902    9985 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:57:00.740532    9985 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 10:57:00.747509    9985 start.go:297] selected driver: qemu2
	I1205 10:57:00.747515    9985 start.go:901] validating driver "qemu2" against &{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:57:00.747570    9985 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:57:00.750203    9985 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 10:57:00.750231    9985 cni.go:84] Creating CNI manager for ""
	I1205 10:57:00.750257    9985 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 10:57:00.750312    9985 start.go:340] cluster config:
	{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:57:00.755116    9985 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:57:00.763532    9985 out.go:177] * Starting "ha-144000" primary control-plane node in "ha-144000" cluster
	I1205 10:57:00.767516    9985 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:57:00.767533    9985 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:57:00.767544    9985 cache.go:56] Caching tarball of preloaded images
	I1205 10:57:00.767627    9985 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:57:00.767632    9985 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:57:00.767681    9985 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/ha-144000/config.json ...
	I1205 10:57:00.768175    9985 start.go:360] acquireMachinesLock for ha-144000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:57:00.768224    9985 start.go:364] duration metric: took 43µs to acquireMachinesLock for "ha-144000"
	I1205 10:57:00.768233    9985 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:57:00.768238    9985 fix.go:54] fixHost starting: 
	I1205 10:57:00.768361    9985 fix.go:112] recreateIfNeeded on ha-144000: state=Stopped err=<nil>
	W1205 10:57:00.768368    9985 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:57:00.775519    9985 out.go:177] * Restarting existing qemu2 VM for "ha-144000" ...
	I1205 10:57:00.779535    9985 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:57:00.779571    9985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:9d:6d:04:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:57:00.781906    9985 main.go:141] libmachine: STDOUT: 
	I1205 10:57:00.781972    9985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:57:00.782002    9985 fix.go:56] duration metric: took 13.764084ms for fixHost
	I1205 10:57:00.782008    9985 start.go:83] releasing machines lock for "ha-144000", held for 13.779084ms
	W1205 10:57:00.782014    9985 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:57:00.782064    9985 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:57:00.782068    9985 start.go:729] Will try again in 5 seconds ...
	I1205 10:57:05.784284    9985 start.go:360] acquireMachinesLock for ha-144000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:57:05.784807    9985 start.go:364] duration metric: took 393.084µs to acquireMachinesLock for "ha-144000"
	I1205 10:57:05.784936    9985 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:57:05.784955    9985 fix.go:54] fixHost starting: 
	I1205 10:57:05.785644    9985 fix.go:112] recreateIfNeeded on ha-144000: state=Stopped err=<nil>
	W1205 10:57:05.785670    9985 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:57:05.796115    9985 out.go:177] * Restarting existing qemu2 VM for "ha-144000" ...
	I1205 10:57:05.799173    9985 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:57:05.799586    9985 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:9d:6d:04:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:57:05.809737    9985 main.go:141] libmachine: STDOUT: 
	I1205 10:57:05.809791    9985 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:57:05.809852    9985 fix.go:56] duration metric: took 24.897625ms for fixHost
	I1205 10:57:05.809869    9985 start.go:83] releasing machines lock for "ha-144000", held for 25.043125ms
	W1205 10:57:05.810043    9985 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-144000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-144000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:57:05.818190    9985 out.go:201] 
	W1205 10:57:05.821264    9985 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:57:05.821291    9985 out.go:270] * 
	* 
	W1205 10:57:05.823774    9985 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:57:05.832117    9985 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-144000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-144000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (37.023459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.08s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.43725ms)

-- stdout --
	* The control-plane node ha-144000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-144000"

-- /stdout --
** stderr ** 
	I1205 10:57:05.989772    9997 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:57:05.990214    9997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:05.990218    9997 out.go:358] Setting ErrFile to fd 2...
	I1205 10:57:05.990221    9997 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:05.990387    9997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:57:05.990626    9997 mustload.go:65] Loading cluster: ha-144000
	I1205 10:57:05.990853    9997 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:57:05.994889    9997 out.go:177] * The control-plane node ha-144000 host is not running: state=Stopped
	I1205 10:57:05.997873    9997 out.go:177]   To start a cluster, run: "minikube start -p ha-144000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-144000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (34.907875ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:57:06.034853    9999 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:57:06.035057    9999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:06.035060    9999 out.go:358] Setting ErrFile to fd 2...
	I1205 10:57:06.035062    9999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:06.035201    9999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:57:06.035337    9999 out.go:352] Setting JSON to false
	I1205 10:57:06.035350    9999 mustload.go:65] Loading cluster: ha-144000
	I1205 10:57:06.035397    9999 notify.go:220] Checking for updates...
	I1205 10:57:06.035556    9999 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:57:06.035564    9999 status.go:174] checking status of ha-144000 ...
	I1205 10:57:06.035801    9999 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:57:06.035804    9999 status.go:384] host is not running, skipping remaining checks
	I1205 10:57:06.035806    9999 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.486333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-144000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.768458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

TestMultiControlPlane/serial/StopCluster (3.69s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-144000 stop -v=7 --alsologtostderr: (3.580507375s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr: exit status 7 (73.373291ms)

-- stdout --
	ha-144000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:57:09.810820   10028 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:57:09.811020   10028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:09.811024   10028 out.go:358] Setting ErrFile to fd 2...
	I1205 10:57:09.811027   10028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:09.811199   10028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:57:09.811374   10028 out.go:352] Setting JSON to false
	I1205 10:57:09.811389   10028 mustload.go:65] Loading cluster: ha-144000
	I1205 10:57:09.811438   10028 notify.go:220] Checking for updates...
	I1205 10:57:09.811675   10028 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:57:09.811684   10028 status.go:174] checking status of ha-144000 ...
	I1205 10:57:09.811983   10028 status.go:371] ha-144000 host status = "Stopped" (err=<nil>)
	I1205 10:57:09.811988   10028 status.go:384] host is not running, skipping remaining checks
	I1205 10:57:09.811990   10028 status.go:176] ha-144000 status: &{Name:ha-144000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-144000 status -v=7 --alsologtostderr": ha-144000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (35.664333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.69s)

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-144000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-144000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.188568167s)

-- stdout --
	* [ha-144000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-144000" primary control-plane node in "ha-144000" cluster
	* Restarting existing qemu2 VM for "ha-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-144000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 10:57:09.881289   10032 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:57:09.881487   10032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:09.881490   10032 out.go:358] Setting ErrFile to fd 2...
	I1205 10:57:09.881493   10032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:09.881607   10032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:57:09.882690   10032 out.go:352] Setting JSON to false
	I1205 10:57:09.900878   10032 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5201,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:57:09.900959   10032 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:57:09.906003   10032 out.go:177] * [ha-144000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:57:09.913938   10032 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:57:09.914002   10032 notify.go:220] Checking for updates...
	I1205 10:57:09.918917   10032 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:57:09.921870   10032 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:57:09.923221   10032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:57:09.925836   10032 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:57:09.928861   10032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:57:09.932148   10032 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:57:09.932455   10032 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:57:09.935818   10032 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 10:57:09.942866   10032 start.go:297] selected driver: qemu2
	I1205 10:57:09.942877   10032 start.go:901] validating driver "qemu2" against &{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:57:09.942936   10032 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:57:09.945568   10032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 10:57:09.945595   10032 cni.go:84] Creating CNI manager for ""
	I1205 10:57:09.945613   10032 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 10:57:09.945680   10032 start.go:340] cluster config:
	{Name:ha-144000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-144000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:57:09.950520   10032 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:57:09.958864   10032 out.go:177] * Starting "ha-144000" primary control-plane node in "ha-144000" cluster
	I1205 10:57:09.962860   10032 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:57:09.962878   10032 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:57:09.962896   10032 cache.go:56] Caching tarball of preloaded images
	I1205 10:57:09.962964   10032 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:57:09.962976   10032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:57:09.963043   10032 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/ha-144000/config.json ...
	I1205 10:57:09.963519   10032 start.go:360] acquireMachinesLock for ha-144000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:57:09.963549   10032 start.go:364] duration metric: took 24.334µs to acquireMachinesLock for "ha-144000"
	I1205 10:57:09.963558   10032 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:57:09.963563   10032 fix.go:54] fixHost starting: 
	I1205 10:57:09.963682   10032 fix.go:112] recreateIfNeeded on ha-144000: state=Stopped err=<nil>
	W1205 10:57:09.963690   10032 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:57:09.971872   10032 out.go:177] * Restarting existing qemu2 VM for "ha-144000" ...
	I1205 10:57:09.975909   10032 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:57:09.975954   10032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:9d:6d:04:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:57:09.978224   10032 main.go:141] libmachine: STDOUT: 
	I1205 10:57:09.978241   10032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:57:09.978271   10032 fix.go:56] duration metric: took 14.707917ms for fixHost
	I1205 10:57:09.978275   10032 start.go:83] releasing machines lock for "ha-144000", held for 14.722209ms
	W1205 10:57:09.978281   10032 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:57:09.978328   10032 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:57:09.978332   10032 start.go:729] Will try again in 5 seconds ...
	I1205 10:57:14.980406   10032 start.go:360] acquireMachinesLock for ha-144000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:57:14.980765   10032 start.go:364] duration metric: took 293.417µs to acquireMachinesLock for "ha-144000"
	I1205 10:57:14.980908   10032 start.go:96] Skipping create...Using existing machine configuration
	I1205 10:57:14.980926   10032 fix.go:54] fixHost starting: 
	I1205 10:57:14.981666   10032 fix.go:112] recreateIfNeeded on ha-144000: state=Stopped err=<nil>
	W1205 10:57:14.981692   10032 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 10:57:14.986238   10032 out.go:177] * Restarting existing qemu2 VM for "ha-144000" ...
	I1205 10:57:14.993091   10032 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:57:14.993268   10032 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9b:9d:6d:04:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/ha-144000/disk.qcow2
	I1205 10:57:15.003496   10032 main.go:141] libmachine: STDOUT: 
	I1205 10:57:15.003594   10032 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:57:15.003701   10032 fix.go:56] duration metric: took 22.772416ms for fixHost
	I1205 10:57:15.003722   10032 start.go:83] releasing machines lock for "ha-144000", held for 22.931042ms
	W1205 10:57:15.003973   10032 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-144000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-144000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:57:15.011140   10032 out.go:201] 
	W1205 10:57:15.015193   10032 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:57:15.015227   10032 out.go:270] * 
	* 
	W1205 10:57:15.017643   10032 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:57:15.023695   10032 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-144000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (75.989833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-144000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.586959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-144000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-144000 --control-plane -v=7 --alsologtostderr: exit status 83 (46.024417ms)

-- stdout --
	* The control-plane node ha-144000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-144000"

-- /stdout --
** stderr ** 
	I1205 10:57:15.236721   10047 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:57:15.236903   10047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:15.236906   10047 out.go:358] Setting ErrFile to fd 2...
	I1205 10:57:15.236909   10047 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:57:15.237024   10047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:57:15.237263   10047 mustload.go:65] Loading cluster: ha-144000
	I1205 10:57:15.237483   10047 config.go:182] Loaded profile config "ha-144000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:57:15.241996   10047 out.go:177] * The control-plane node ha-144000 host is not running: state=Stopped
	I1205 10:57:15.245853   10047 out.go:177]   To start a cluster, run: "minikube start -p ha-144000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-144000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (34.770417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-144000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-144000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-144000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-144000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-144000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-144000 -n ha-144000: exit status 7 (33.72925ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-144000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)
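
For reference, the assertion at ha_test.go:309 reads the Status field of the matching entry in the `profile list --output json` payload shown above. A minimal sketch, assuming a hypothetical standalone helper (not the test's actual code), of decoding that shape:

	// Sketch: decode the `profile list --output json` payload shown above and
	// look up one profile's status. Only the fields used here are modeled.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func profileStatus(raw []byte, name string) (string, error) {
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			return "", err
		}
		for _, p := range pl.Valid {
			if p.Name == name {
				return p.Status, nil // here: "Starting" rather than the expected "HAppy"
			}
		}
		return "", fmt.Errorf("profile %q not found", name)
	}

	func main() {
		status, err := profileStatus([]byte(`{"valid":[{"Name":"ha-144000","Status":"Starting"}]}`), "ha-144000")
		fmt.Println(status, err) // Starting <nil>
	}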

TestImageBuild/serial/Setup (10.02s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-776000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-776000 --driver=qemu2 : exit status 80 (9.940930291s)
-- stdout --
	* [image-776000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-776000" primary control-plane node in "image-776000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-776000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-776000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-776000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-776000 -n image-776000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-776000 -n image-776000: exit status 7 (73.567542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-776000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.02s)
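
Every provisioning failure in this run bottoms out in the same condition: nothing is accepting connections on /var/run/socket_vmnet. A quick diagnostic sketch (illustrative only, not part of the suite) that reproduces the refusal outside minikube:

	// Sketch: probe the socket_vmnet unix socket; "connection refused" here
	// means the socket_vmnet daemon is not running (or the socket is stale).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // matches the errors above
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}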

TestJSONOutput/start/Command (9.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-726000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-726000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.821665708s)
-- stdout --
	{"specversion":"1.0","id":"9547b6ab-466d-4043-9121-d20ac6650674","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-726000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ad74bc7-0eef-48d2-9b81-4b77e47954d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"5df6afb9-bdbb-487b-a50e-4b4dee50bb60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig"}}
	{"specversion":"1.0","id":"5917572a-2348-4da7-a597-e0febb808433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b52d658b-9619-416d-adad-2f53410606cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5bb73ce0-ca65-4065-b0fb-5893f14eee29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube"}}
	{"specversion":"1.0","id":"7d903e81-cb0c-4558-8514-fae2a4131993","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6264c91a-1949-45b8-8ce3-4ba6ba88b6a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a4f0a91-4e3c-46c6-b62f-30ea9310016b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"b07f75d1-8aa7-471d-8a76-b4ac2163edeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-726000\" primary control-plane node in \"json-output-726000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d95574f8-149d-4202-bd4b-98a61c1e734d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3ecd9bcf-9d4c-45fe-80b9-6a26625107bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-726000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec9abf03-96ed-4c29-acc5-cdd7b365e6e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"347ba600-a003-41c1-bf4c-be0fa7625361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"dc7abc65-52c7-4f68-a396-166af1f315b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-726000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5d1e2b61-e965-422f-a9ef-09f84b128035","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"db2e0a20-38b7-41ea-82d7-f924a0741d5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-726000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.82s)
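
The `invalid character 'O' looking for beginning of value` failure above is the CloudEvents check hitting the raw `OUTPUT:` / `ERROR:` lines that the driver interleaves with the JSON events. A minimal sketch, assuming line-by-line decoding (the test's exact mechanics may differ), of how that error arises:

	// Sketch: unmarshal stdout line by line; the non-JSON "OUTPUT:" line
	// fails exactly as reported above.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

		sc := bufio.NewScanner(strings.NewReader(stdout))
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				// prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
		}
	}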

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-726000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-726000 --output=json --user=testUser: exit status 83 (85.239333ms)
-- stdout --
	{"specversion":"1.0","id":"683c9464-a332-413c-aa1e-44790853c38a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-726000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"adeb0980-01fc-4313-af7b-d97225864b02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-726000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-726000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-726000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-726000 --output=json --user=testUser: exit status 83 (48.192083ms)
-- stdout --
	* The control-plane node json-output-726000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-726000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-726000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-726000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.37s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-254000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-254000 --driver=qemu2 : exit status 80 (10.048144167s)
-- stdout --
	* [first-254000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-254000" primary control-plane node in "first-254000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-254000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-254000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-254000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-05 10:57:49.357908 -0800 PST m=+474.049028835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-256000 -n second-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-256000 -n second-256000: exit status 85 (87.786541ms)
-- stdout --
	* Profile "second-256000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-256000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-256000" host is not running, skipping log retrieval (state="* Profile \"second-256000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-256000\"")
helpers_test.go:175: Cleaning up "second-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-256000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-05 10:57:49.564679 -0800 PST m=+474.255801710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-254000 -n first-254000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-254000 -n first-254000: exit status 7 (35.258417ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-254000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-254000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-254000
--- FAIL: TestMinikubeProfile (10.37s)
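
The post-mortem above leans on `minikube status` exit codes: 7 for a stopped host, 85 for a profile that does not exist (values taken from this log). A hedged sketch of recovering such an exit code from Go, roughly as a harness like helpers_test.go would:

	// Sketch: run `minikube status` and recover its exit code. The code
	// values (7 = stopped, 85 = profile not found) come from the log above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "first-254000", "-n", "first-254000")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("status %q, exit code %d (may be ok)\n", out, exitErr.ExitCode())
			return
		}
		fmt.Printf("status %q\n", out)
	}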

TestMountStart/serial/StartWithMountFirst (10.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-809000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-809000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.116050125s)
-- stdout --
	* [mount-start-1-809000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-809000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-809000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-809000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-809000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-809000 -n mount-start-1-809000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-809000 -n mount-start-1-809000: exit status 7 (74.627209ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-809000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.19s)

TestMultiNode/serial/FreshStart2Nodes (10.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-454000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-454000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.087970333s)
-- stdout --
	* [multinode-454000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-454000" primary control-plane node in "multinode-454000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-454000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1205 10:58:00.111730   10192 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:58:00.111902   10192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:58:00.111905   10192 out.go:358] Setting ErrFile to fd 2...
	I1205 10:58:00.111907   10192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:58:00.112037   10192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:58:00.113170   10192 out.go:352] Setting JSON to false
	I1205 10:58:00.131497   10192 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5252,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:58:00.131568   10192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:58:00.137499   10192 out.go:177] * [multinode-454000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:58:00.145550   10192 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:58:00.145585   10192 notify.go:220] Checking for updates...
	I1205 10:58:00.153474   10192 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:58:00.156540   10192 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:58:00.160493   10192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:58:00.163517   10192 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:58:00.166551   10192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:58:00.169675   10192 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:58:00.174494   10192 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 10:58:00.181500   10192 start.go:297] selected driver: qemu2
	I1205 10:58:00.181507   10192 start.go:901] validating driver "qemu2" against <nil>
	I1205 10:58:00.181514   10192 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:58:00.184087   10192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 10:58:00.188490   10192 out.go:177] * Automatically selected the socket_vmnet network
	I1205 10:58:00.191606   10192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 10:58:00.191639   10192 cni.go:84] Creating CNI manager for ""
	I1205 10:58:00.191660   10192 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 10:58:00.191664   10192 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 10:58:00.191701   10192 start.go:340] cluster config:
	{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:58:00.196996   10192 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:58:00.205587   10192 out.go:177] * Starting "multinode-454000" primary control-plane node in "multinode-454000" cluster
	I1205 10:58:00.209482   10192 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:58:00.209500   10192 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:58:00.209519   10192 cache.go:56] Caching tarball of preloaded images
	I1205 10:58:00.209608   10192 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 10:58:00.209614   10192 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 10:58:00.209850   10192 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/multinode-454000/config.json ...
	I1205 10:58:00.209862   10192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/multinode-454000/config.json: {Name:mk9e5486e366a28585d66fab800f87c8fdb2bc66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 10:58:00.210331   10192 start.go:360] acquireMachinesLock for multinode-454000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:58:00.210383   10192 start.go:364] duration metric: took 45.834µs to acquireMachinesLock for "multinode-454000"
	I1205 10:58:00.210396   10192 start.go:93] Provisioning new machine with config: &{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 10:58:00.210426   10192 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 10:58:00.219492   10192 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 10:58:00.238140   10192 start.go:159] libmachine.API.Create for "multinode-454000" (driver="qemu2")
	I1205 10:58:00.238173   10192 client.go:168] LocalClient.Create starting
	I1205 10:58:00.238259   10192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 10:58:00.238301   10192 main.go:141] libmachine: Decoding PEM data...
	I1205 10:58:00.238312   10192 main.go:141] libmachine: Parsing certificate...
	I1205 10:58:00.238351   10192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 10:58:00.238384   10192 main.go:141] libmachine: Decoding PEM data...
	I1205 10:58:00.238395   10192 main.go:141] libmachine: Parsing certificate...
	I1205 10:58:00.238800   10192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 10:58:00.400510   10192 main.go:141] libmachine: Creating SSH key...
	I1205 10:58:00.654196   10192 main.go:141] libmachine: Creating Disk image...
	I1205 10:58:00.654205   10192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 10:58:00.654436   10192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 10:58:00.664823   10192 main.go:141] libmachine: STDOUT: 
	I1205 10:58:00.664843   10192 main.go:141] libmachine: STDERR: 
	I1205 10:58:00.664913   10192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2 +20000M
	I1205 10:58:00.673511   10192 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 10:58:00.673527   10192 main.go:141] libmachine: STDERR: 
	I1205 10:58:00.673551   10192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 10:58:00.673556   10192 main.go:141] libmachine: Starting QEMU VM...
	I1205 10:58:00.673568   10192 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:58:00.673595   10192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:65:99:da:1a:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 10:58:00.675491   10192 main.go:141] libmachine: STDOUT: 
	I1205 10:58:00.675505   10192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:58:00.675523   10192 client.go:171] duration metric: took 437.348666ms to LocalClient.Create
	I1205 10:58:02.677677   10192 start.go:128] duration metric: took 2.467255s to createHost
	I1205 10:58:02.677745   10192 start.go:83] releasing machines lock for "multinode-454000", held for 2.4673775s
	W1205 10:58:02.677807   10192 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:58:02.693965   10192 out.go:177] * Deleting "multinode-454000" in qemu2 ...
	W1205 10:58:02.722979   10192 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:58:02.723015   10192 start.go:729] Will try again in 5 seconds ...
	I1205 10:58:07.725190   10192 start.go:360] acquireMachinesLock for multinode-454000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 10:58:07.725775   10192 start.go:364] duration metric: took 478.042µs to acquireMachinesLock for "multinode-454000"
	I1205 10:58:07.725905   10192 start.go:93] Provisioning new machine with config: &{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 10:58:07.726117   10192 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 10:58:07.743675   10192 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 10:58:07.794055   10192 start.go:159] libmachine.API.Create for "multinode-454000" (driver="qemu2")
	I1205 10:58:07.794108   10192 client.go:168] LocalClient.Create starting
	I1205 10:58:07.794230   10192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 10:58:07.794300   10192 main.go:141] libmachine: Decoding PEM data...
	I1205 10:58:07.794318   10192 main.go:141] libmachine: Parsing certificate...
	I1205 10:58:07.794375   10192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 10:58:07.794437   10192 main.go:141] libmachine: Decoding PEM data...
	I1205 10:58:07.794447   10192 main.go:141] libmachine: Parsing certificate...
	I1205 10:58:07.795089   10192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 10:58:07.969581   10192 main.go:141] libmachine: Creating SSH key...
	I1205 10:58:08.099097   10192 main.go:141] libmachine: Creating Disk image...
	I1205 10:58:08.099107   10192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 10:58:08.099313   10192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 10:58:08.109222   10192 main.go:141] libmachine: STDOUT: 
	I1205 10:58:08.109244   10192 main.go:141] libmachine: STDERR: 
	I1205 10:58:08.109300   10192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2 +20000M
	I1205 10:58:08.117695   10192 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 10:58:08.117719   10192 main.go:141] libmachine: STDERR: 
	I1205 10:58:08.117735   10192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 10:58:08.117739   10192 main.go:141] libmachine: Starting QEMU VM...
	I1205 10:58:08.117748   10192 qemu.go:418] Using hvf for hardware acceleration
	I1205 10:58:08.117788   10192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1a:62:43:09:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 10:58:08.119581   10192 main.go:141] libmachine: STDOUT: 
	I1205 10:58:08.119595   10192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 10:58:08.119610   10192 client.go:171] duration metric: took 325.50175ms to LocalClient.Create
	I1205 10:58:10.120423   10192 start.go:128] duration metric: took 2.394232292s to createHost
	I1205 10:58:10.120501   10192 start.go:83] releasing machines lock for "multinode-454000", held for 2.394725041s
	W1205 10:58:10.120863   10192 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-454000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-454000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 10:58:10.131533   10192 out.go:201] 
	W1205 10:58:10.139694   10192 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 10:58:10.139717   10192 out.go:270] * 
	* 
	W1205 10:58:10.142685   10192 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:58:10.151683   10192 out.go:201] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-454000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (73.808542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.16s)
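
The stderr trace above shows libmachine preparing the disk with two qemu-img calls before handing the VM to socket_vmnet_client. A sketch of those same two steps via os/exec (the file paths are placeholders for the full ones in the log):

	// Sketch: the qemu-img convert + resize steps from the libmachine trace,
	// reproduced with os/exec. Paths are hypothetical stand-ins.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		raw := "disk.qcow2.raw" // placeholder path
		qcow2 := "disk.qcow2"   // placeholder path

		// qemu-img convert -f raw -O qcow2 <raw> <qcow2>
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			log.Fatalf("convert: %v: %s", err, out)
		}
		// qemu-img resize <qcow2> +20000M
		if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize: %v: %s", err, out)
		}
	}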

TestMultiNode/serial/DeployApp2Nodes (103.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (64.75325ms)
** stderr ** 
	error: cluster "multinode-454000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- rollout status deployment/busybox: exit status 1 (61.708667ms)
** stderr ** 
	error: no server found for cluster "multinode-454000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.87875ms)
** stderr ** 
	error: no server found for cluster "multinode-454000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:10.431519    9136 retry.go:31] will retry after 973.029821ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.324459ms)
** stderr ** 
	error: no server found for cluster "multinode-454000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:11.517342    9136 retry.go:31] will retry after 1.501683932s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.056792ms)
** stderr ** 
	error: no server found for cluster "multinode-454000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:13.129494    9136 retry.go:31] will retry after 1.148091997s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.317708ms)
** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:14.388254    9136 retry.go:31] will retry after 4.837023653s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.745834ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:19.337351    9136 retry.go:31] will retry after 3.354637816s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.5445ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:22.802851    9136 retry.go:31] will retry after 8.410926077s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.544208ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:31.325608    9136 retry.go:31] will retry after 7.89728339s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.823ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:39.332695    9136 retry.go:31] will retry after 12.376538266s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.689542ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:58:51.819397    9136 retry.go:31] will retry after 15.6442208s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.735375ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1205 10:59:07.575632    9136 retry.go:31] will retry after 45.713716104s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.075625ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.793709ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.868875ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.91925ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.16825ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (34.666583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (103.44s)
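Note: every kubectl invocation in this subtest fails identically with `error: no server found for cluster "multinode-454000"`, so the retry loop visible in the retry.go:31 lines above can never succeed; it only stretches the failure out to the 103 s the test records. A minimal Go sketch of that retry-with-jittered-backoff pattern, under the assumption that the delays roughly double between attempts as the log suggests; retryExpo and its parameters are illustrative stand-ins, not minikube's actual retry helper:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo calls fn until it succeeds or the time budget runs out,
// roughly doubling a jittered wait between attempts, as the
// "will retry after ..." lines above suggest.
func retryExpo(fn func() error, initial, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	wait := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().Add(wait).After(deadline) {
			return fmt.Errorf("retry budget exhausted: %w", err)
		}
		// Jittered backoff: sleep somewhere in [wait, 2*wait).
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	err := retryExpo(func() error {
		return fmt.Errorf(`no server found for cluster "multinode-454000"`)
	}, time.Second, 10*time.Second)
	fmt.Println(err) // always fails here, as in the test
}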

TestMultiNode/serial/PingHostFrom2Pods (0.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.413875ms)

** stderr ** 
	error: no server found for cluster "multinode-454000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (35.512959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-454000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-454000 -v 3 --alsologtostderr: exit status 83 (47.995792ms)

-- stdout --
	* The control-plane node multinode-454000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-454000"

-- /stdout --
** stderr ** 
	I1205 10:59:53.814276   10280 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:53.814477   10280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:53.814480   10280 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:53.814482   10280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:53.814616   10280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:53.814851   10280 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:53.815057   10280 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:53.819501   10280 out.go:177] * The control-plane node multinode-454000 host is not running: state=Stopped
	I1205 10:59:53.824637   10280 out.go:177]   To start a cluster, run: "minikube start -p multinode-454000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-454000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (35.252208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
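Note: each `(dbg) Run:` / `(dbg) Non-zero exit:` pair above is the harness shelling out to the binary under test and capturing stdout, stderr, and the exit code (83 here, alongside the "host is not running: state=Stopped" advice). A minimal sketch of that capture pattern; runCmd is a hypothetical helper, not the one in helpers_test.go:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

// runCmd executes a binary and reports its exit code together with the
// captured stdout/stderr, mirroring the "(dbg) Run:" and
// "(dbg) Non-zero exit:" lines in this report.
func runCmd(bin string, args ...string) (stdout, stderr string, code int, err error) {
	cmd := exec.Command(bin, args...)
	var outBuf, errBuf bytes.Buffer
	cmd.Stdout = &outBuf
	cmd.Stderr = &errBuf
	err = cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
		err = nil // a non-zero exit is data here, not a harness failure
	}
	return outBuf.String(), errBuf.String(), code, err
}

func main() {
	out, serr, code, err := runCmd("out/minikube-darwin-arm64",
		"node", "add", "-p", "multinode-454000", "-v", "3", "--alsologtostderr")
	if err != nil {
		fmt.Println("could not run binary:", err)
		return
	}
	fmt.Printf("exit %d\nstdout:\n%s\nstderr:\n%s", code, out, serr)
}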

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-454000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-454000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.966958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-454000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-454000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-454000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (35.667958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
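Note: unlike the preceding subtests, MultiNodeLabels calls plain kubectl with `--context multinode-454000`, so it fails during kubeconfig resolution ("context was not found") rather than at connection time: the context was never written because the cluster never started. A minimal client-go sketch of the same lookup, assuming k8s.io/client-go is available on the module path:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// main loads the default kubeconfig chain (KUBECONFIG, then
// ~/.kube/config) and checks for the profile's context -- the lookup
// kubectl performs before printing "context was not found".
func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	const want = "multinode-454000"
	if _, ok := cfg.Contexts[want]; !ok {
		fmt.Printf("context %q not found (%d contexts present)\n", want, len(cfg.Contexts))
		return
	}
	fmt.Println("context exists:", want)
}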

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-454000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-454000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-454000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-454000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (34.744125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
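Note: the assertion at multinode_test.go:166 decodes the `profile list --output json` payload quoted above and counts Config.Nodes; because the worker nodes were never added, the array holds only the single control-plane entry. A trimmed sketch of that decode, keeping just the fields the check needs (the JSON keys match the payload above; the Go type names are invented for this illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just enough of the `profile list --output json`
// shape to count nodes per profile.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Abbreviated from the payload in the failure message above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-454000",
		"Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // test wanted 3, got 1
	}
}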

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status --output json --alsologtostderr: exit status 7 (34.369833ms)

-- stdout --
	{"Name":"multinode-454000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1205 10:59:54.048154   10292 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:54.048324   10292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.048327   10292 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:54.048329   10292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.048475   10292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:54.048599   10292 out.go:352] Setting JSON to true
	I1205 10:59:54.048609   10292 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:54.048667   10292 notify.go:220] Checking for updates...
	I1205 10:59:54.048834   10292 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:54.048841   10292 status.go:174] checking status of multinode-454000 ...
	I1205 10:59:54.049078   10292 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 10:59:54.049081   10292 status.go:384] host is not running, skipping remaining checks
	I1205 10:59:54.049083   10292 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-454000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (35.024542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
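Note: this failure is a shape mismatch rather than a connection error: with a single stopped node, `status --output json` prints one JSON object (see the stdout above), while the test unmarshals into []cluster.Status, hence "cannot unmarshal object into Go value". A small sketch of a decode that tolerates both shapes; the status struct is a stand-in for minikube's real type:

package main

import (
	"encoding/json"
	"fmt"
)

// status stands in for the handful of fields the test reads.
type status struct {
	Name string
	Host string
}

// decodeStatuses accepts either a bare object (single-node output, as
// in this report) or an array of objects, returning a slice either way.
func decodeStatuses(raw []byte) ([]status, error) {
	var many []status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, fmt.Errorf("neither object nor array: %w", err)
	}
	return []status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-454000","Host":"Stopped"}`)
	sts, err := decodeStatuses(raw)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("%d record(s), first host=%s\n", len(sts), sts[0].Host)
}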

TestMultiNode/serial/StopNode (0.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 node stop m03: exit status 85 (51.8385ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-454000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status: exit status 7 (34.903209ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr: exit status 7 (34.552417ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:59:54.205392   10300 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:54.205582   10300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.205584   10300 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:54.205587   10300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.205717   10300 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:54.205844   10300 out.go:352] Setting JSON to false
	I1205 10:59:54.205853   10300 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:54.205919   10300 notify.go:220] Checking for updates...
	I1205 10:59:54.206061   10300 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:54.206069   10300 status.go:174] checking status of multinode-454000 ...
	I1205 10:59:54.206317   10300 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 10:59:54.206321   10300 status.go:384] host is not running, skipping remaining checks
	I1205 10:59:54.206323   10300 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr": multinode-454000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (34.375042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)
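Note: the recurring `exit status 7 (may be ok)` is expected for a stopped profile. Per `minikube status --help`, the exit status packs component health into bits, right to left: 1 for the minikube VM, 2 for the cluster, 4 for Kubernetes, so 7 means all three are down. A short sketch decoding that convention (worth re-verifying against the exact minikube version under test):

package main

import "fmt"

// Bit flags `minikube status` documents for its exit status; 7 = 1+2+4,
// i.e. VM, cluster, and Kubernetes are all not running -- matching the
// Stopped profile seen throughout these post-mortems.
const (
	minikubeNOK   = 1 << 0
	clusterNOK    = 1 << 1
	kubernetesNOK = 1 << 2
)

func main() {
	code := 7 // exit status observed above
	fmt.Printf("minikube VM down: %v\n", code&minikubeNOK != 0)
	fmt.Printf("cluster down:     %v\n", code&clusterNOK != 0)
	fmt.Printf("kubernetes down:  %v\n", code&kubernetesNOK != 0)
}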

TestMultiNode/serial/StartAfterStop (50.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 node start m03 -v=7 --alsologtostderr: exit status 85 (52.481917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1205 10:59:54.275304   10304 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:54.275728   10304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.275732   10304 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:54.275734   10304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.275909   10304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:54.276122   10304 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:54.276304   10304 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:54.280711   10304 out.go:201] 
	W1205 10:59:54.284638   10304 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1205 10:59:54.284649   10304 out.go:270] * 
	* 
	W1205 10:59:54.286425   10304 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:59:54.289673   10304 out.go:201] 

** /stderr **
multinode_test.go:284: I1205 10:59:54.275304   10304 out.go:345] Setting OutFile to fd 1 ...
I1205 10:59:54.275728   10304 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:59:54.275732   10304 out.go:358] Setting ErrFile to fd 2...
I1205 10:59:54.275734   10304 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 10:59:54.275909   10304 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
I1205 10:59:54.276122   10304 mustload.go:65] Loading cluster: multinode-454000
I1205 10:59:54.276304   10304 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1205 10:59:54.280711   10304 out.go:201] 
W1205 10:59:54.284638   10304 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1205 10:59:54.284649   10304 out.go:270] * 
* 
W1205 10:59:54.286425   10304 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 10:59:54.289673   10304 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-454000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (35.615459ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:59:54.327426   10306 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:54.327609   10306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.327612   10306 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:54.327614   10306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:54.327751   10306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:54.327883   10306 out.go:352] Setting JSON to false
	I1205 10:59:54.327896   10306 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:54.328155   10306 notify.go:220] Checking for updates...
	I1205 10:59:54.328962   10306 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:54.328976   10306 status.go:174] checking status of multinode-454000 ...
	I1205 10:59:54.329211   10306 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 10:59:54.329216   10306 status.go:384] host is not running, skipping remaining checks
	I1205 10:59:54.329218   10306 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:59:54.330253    9136 retry.go:31] will retry after 837.482355ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (77.928584ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:59:55.245805   10310 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:55.246033   10310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:55.246038   10310 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:55.246041   10310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:55.246210   10310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:55.246379   10310 out.go:352] Setting JSON to false
	I1205 10:59:55.246392   10310 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:55.246438   10310 notify.go:220] Checking for updates...
	I1205 10:59:55.246675   10310 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:55.246684   10310 status.go:174] checking status of multinode-454000 ...
	I1205 10:59:55.246999   10310 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 10:59:55.247004   10310 status.go:384] host is not running, skipping remaining checks
	I1205 10:59:55.247007   10310 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:59:55.248038    9136 retry.go:31] will retry after 1.695232531s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (77.645125ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 10:59:57.021198   10312 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:59:57.021411   10312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:57.021415   10312 out.go:358] Setting ErrFile to fd 2...
	I1205 10:59:57.021418   10312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:59:57.021600   10312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:59:57.021755   10312 out.go:352] Setting JSON to false
	I1205 10:59:57.021766   10312 mustload.go:65] Loading cluster: multinode-454000
	I1205 10:59:57.021798   10312 notify.go:220] Checking for updates...
	I1205 10:59:57.022001   10312 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:59:57.022010   10312 status.go:174] checking status of multinode-454000 ...
	I1205 10:59:57.022293   10312 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 10:59:57.022298   10312 status.go:384] host is not running, skipping remaining checks
	I1205 10:59:57.022300   10312 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 10:59:57.023347    9136 retry.go:31] will retry after 3.344953498s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (78.774334ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:00:00.447181   10314 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:00.447416   10314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:00.447420   10314 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:00.447423   10314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:00.447604   10314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:00.447760   10314 out.go:352] Setting JSON to false
	I1205 11:00:00.447773   10314 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:00.447821   10314 notify.go:220] Checking for updates...
	I1205 11:00:00.448074   10314 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:00.448086   10314 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:00.448383   10314 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:00.448388   10314 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:00.448391   10314 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 11:00:00.449460    9136 retry.go:31] will retry after 3.276201882s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (72.302625ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:00:03.798063   10598 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:03.798288   10598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:03.798292   10598 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:03.798295   10598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:03.798450   10598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:03.798611   10598 out.go:352] Setting JSON to false
	I1205 11:00:03.798623   10598 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:03.798669   10598 notify.go:220] Checking for updates...
	I1205 11:00:03.798875   10598 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:03.798883   10598 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:03.799190   10598 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:03.799194   10598 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:03.799197   10598 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 11:00:03.800371    9136 retry.go:31] will retry after 5.944224585s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (79.802083ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:00:09.824565   10613 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:09.824773   10613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:09.824777   10613 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:09.824781   10613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:09.824937   10613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:09.825083   10613 out.go:352] Setting JSON to false
	I1205 11:00:09.825096   10613 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:09.825146   10613 notify.go:220] Checking for updates...
	I1205 11:00:09.825374   10613 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:09.825384   10613 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:09.825675   10613 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:09.825680   10613 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:09.825683   10613 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 11:00:09.826688    9136 retry.go:31] will retry after 11.267306577s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (77.944958ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:00:21.172065   10620 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:21.172307   10620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:21.172311   10620 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:21.172314   10620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:21.172481   10620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:21.172643   10620 out.go:352] Setting JSON to false
	I1205 11:00:21.172659   10620 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:21.172702   10620 notify.go:220] Checking for updates...
	I1205 11:00:21.172940   10620 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:21.172948   10620 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:21.173267   10620 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:21.173272   10620 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:21.173275   10620 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 11:00:21.174302    9136 retry.go:31] will retry after 7.311924804s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (77.073792ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:00:28.563512   10624 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:28.563749   10624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:28.563753   10624 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:28.563757   10624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:28.563927   10624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:28.564093   10624 out.go:352] Setting JSON to false
	I1205 11:00:28.564104   10624 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:28.564147   10624 notify.go:220] Checking for updates...
	I1205 11:00:28.564393   10624 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:28.564403   10624 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:28.564680   10624 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:28.564685   10624 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:28.564687   10624 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1205 11:00:28.565689    9136 retry.go:31] will retry after 16.283141206s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr: exit status 7 (80.509709ms)

-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1205 11:00:44.929400   10628 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:44.929632   10628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:44.929636   10628 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:44.929639   10628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:44.929801   10628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:44.929976   10628 out.go:352] Setting JSON to false
	I1205 11:00:44.929989   10628 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:44.930051   10628 notify.go:220] Checking for updates...
	I1205 11:00:44.930266   10628 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:44.930275   10628 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:44.930589   10628 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:44.930593   10628 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:44.930596   10628 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-454000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (36.884667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (50.73s)
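
The two retry lines above (7.31s, then 16.28s) come from the test helper backing off between status probes before declaring failure. A minimal Go sketch of that poll-with-growing-waits pattern; the binary path and profile name are taken from the log, while the wait values are hardcoded here purely for illustration (the real helper in retry.go computes them):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Waits mirror the two retries logged above; illustrative only.
		waits := []time.Duration{7 * time.Second, 16 * time.Second}
		for i := 0; ; i++ {
			err := exec.Command("out/minikube-darwin-arm64",
				"-p", "multinode-454000", "status").Run()
			if err == nil || i >= len(waits) {
				fmt.Println("giving up, last error:", err) // exit status 7 in this run
				return
			}
			fmt.Printf("will retry after %s: %v\n", waits[i], err)
			time.Sleep(waits[i])
		}
	}

In this run every attempt exits with status 7 because the host never leaves the Stopped state, so the loop exhausts its waits and the test fails.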

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-454000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-454000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-454000: (1.895145209s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.231556125s)

                                                
                                                
-- stdout --
	* [multinode-454000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-454000" primary control-plane node in "multinode-454000" cluster
	* Restarting existing qemu2 VM for "multinode-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:00:46.965391   10644 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:46.965603   10644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:46.965607   10644 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:46.965610   10644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:46.965780   10644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:46.967007   10644 out.go:352] Setting JSON to false
	I1205 11:00:46.986947   10644 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5418,"bootTime":1733419828,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:00:46.987023   10644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:00:46.992104   10644 out.go:177] * [multinode-454000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:00:46.998966   10644 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:00:46.999024   10644 notify.go:220] Checking for updates...
	I1205 11:00:47.005935   10644 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:00:47.009004   10644 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:00:47.011996   10644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:00:47.014989   10644 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:00:47.017958   10644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:00:47.021365   10644 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:47.021425   10644 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:00:47.025903   10644 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:00:47.032991   10644 start.go:297] selected driver: qemu2
	I1205 11:00:47.032996   10644 start.go:901] validating driver "qemu2" against &{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:00:47.033040   10644 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:00:47.035602   10644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:00:47.035627   10644 cni.go:84] Creating CNI manager for ""
	I1205 11:00:47.035649   10644 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 11:00:47.035703   10644 start.go:340] cluster config:
	{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:00:47.040370   10644 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:00:47.048978   10644 out.go:177] * Starting "multinode-454000" primary control-plane node in "multinode-454000" cluster
	I1205 11:00:47.052826   10644 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:00:47.052844   10644 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:00:47.052861   10644 cache.go:56] Caching tarball of preloaded images
	I1205 11:00:47.052948   10644 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:00:47.052954   10644 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:00:47.053012   10644 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/multinode-454000/config.json ...
	I1205 11:00:47.053497   10644 start.go:360] acquireMachinesLock for multinode-454000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:00:47.053548   10644 start.go:364] duration metric: took 44.458µs to acquireMachinesLock for "multinode-454000"
	I1205 11:00:47.053557   10644 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:00:47.053562   10644 fix.go:54] fixHost starting: 
	I1205 11:00:47.053689   10644 fix.go:112] recreateIfNeeded on multinode-454000: state=Stopped err=<nil>
	W1205 11:00:47.053696   10644 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:00:47.057067   10644 out.go:177] * Restarting existing qemu2 VM for "multinode-454000" ...
	I1205 11:00:47.064937   10644 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:00:47.064981   10644 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1a:62:43:09:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 11:00:47.067253   10644 main.go:141] libmachine: STDOUT: 
	I1205 11:00:47.067277   10644 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:00:47.067305   10644 fix.go:56] duration metric: took 13.742541ms for fixHost
	I1205 11:00:47.067310   10644 start.go:83] releasing machines lock for "multinode-454000", held for 13.756959ms
	W1205 11:00:47.067315   10644 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:00:47.067351   10644 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:00:47.067356   10644 start.go:729] Will try again in 5 seconds ...
	I1205 11:00:52.069608   10644 start.go:360] acquireMachinesLock for multinode-454000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:00:52.070138   10644 start.go:364] duration metric: took 413.25µs to acquireMachinesLock for "multinode-454000"
	I1205 11:00:52.070308   10644 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:00:52.070329   10644 fix.go:54] fixHost starting: 
	I1205 11:00:52.071031   10644 fix.go:112] recreateIfNeeded on multinode-454000: state=Stopped err=<nil>
	W1205 11:00:52.071056   10644 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:00:52.079405   10644 out.go:177] * Restarting existing qemu2 VM for "multinode-454000" ...
	I1205 11:00:52.083434   10644 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:00:52.083670   10644 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1a:62:43:09:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 11:00:52.094297   10644 main.go:141] libmachine: STDOUT: 
	I1205 11:00:52.094346   10644 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:00:52.094419   10644 fix.go:56] duration metric: took 24.093459ms for fixHost
	I1205 11:00:52.094438   10644 start.go:83] releasing machines lock for "multinode-454000", held for 24.279541ms
	W1205 11:00:52.094638   10644 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-454000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-454000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:00:52.102417   10644 out.go:201] 
	W1205 11:00:52.106503   10644 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:00:52.106522   10644 out.go:270] * 
	* 
	W1205 11:00:52.108577   10644 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:00:52.117389   10644 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-454000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-454000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (36.271292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.27s)
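
Every restart attempt in this run dies at the same point: the qemu2 driver launches the VM through socket_vmnet_client, which cannot reach the daemon socket and reports `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal Go sketch, assuming only the socket path shown in the log, that checks whether anything is listening there:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Path copied from the SocketVMnetPath field in the cluster config above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// "connection refused" means the socket file exists but no
			// socket_vmnet daemon is accepting on it, matching this run.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused dial here would explain why every start retry above fails instantly rather than timing out: the failure happens on the host side, before the VM boots.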

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 node delete m03: exit status 83 (44.284375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-454000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-454000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-454000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr: exit status 7 (34.66075ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:00:52.317844   10658 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:52.318025   10658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:52.318029   10658 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:52.318031   10658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:52.318166   10658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:52.318285   10658 out.go:352] Setting JSON to false
	I1205 11:00:52.318296   10658 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:52.318334   10658 notify.go:220] Checking for updates...
	I1205 11:00:52.318503   10658 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:52.318511   10658 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:52.318762   10658 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:52.318765   10658 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:52.318767   10658 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (34.946208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
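
Both commands in this block fail before touching the cluster: `node delete` exits 83 alongside the advisory "host is not running" message, and the follow-up `status` exits 7 with every row Stopped. An exit status of 7 is consistent with a bitmask over the three Stopped rows; the flag values in this sketch are an assumption for illustration, not something the log itself confirms:

	package main

	import "fmt"

	// Assumed flag values; the log only shows the resulting exit status,
	// not how minikube composes it.
	const (
		hostStopped      = 1 << 0
		kubeletStopped   = 1 << 1
		apiserverStopped = 1 << 2
	)

	func main() {
		code := hostStopped | kubeletStopped | apiserverStopped
		fmt.Println("exit status:", code) // prints 7, as seen in every status run here
	}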

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-454000 stop: (3.789555042s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status: exit status 7 (74.423833ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr: exit status 7 (36.399709ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:00:56.253838   10684 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:56.254015   10684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:56.254018   10684 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:56.254020   10684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:56.254167   10684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:56.254293   10684 out.go:352] Setting JSON to false
	I1205 11:00:56.254302   10684 mustload.go:65] Loading cluster: multinode-454000
	I1205 11:00:56.254372   10684 notify.go:220] Checking for updates...
	I1205 11:00:56.254505   10684 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:56.254512   10684 status.go:174] checking status of multinode-454000 ...
	I1205 11:00:56.254759   10684 status.go:371] multinode-454000 host status = "Stopped" (err=<nil>)
	I1205 11:00:56.254762   10684 status.go:384] host is not running, skipping remaining checks
	I1205 11:00:56.254764   10684 status.go:176] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr": multinode-454000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-454000 status --alsologtostderr": multinode-454000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (34.874125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.94s)
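
The stop itself succeeds; the test then fails at multinode_test.go:364 and :368 because the status output lists only one stopped host and one stopped kubelet where a multi-node cluster should report one per node. A rough Go sketch of that kind of counting check, with the expected count of 2 assumed from the test's two-node scenario:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output trimmed from the run above: only one node reports in.
		out := "multinode-454000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		got := strings.Count(out, "host: Stopped")
		if want := 2; got != want { // two nodes expected here (assumed)
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
	}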

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.19674725s)

                                                
                                                
-- stdout --
	* [multinode-454000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-454000" primary control-plane node in "multinode-454000" cluster
	* Restarting existing qemu2 VM for "multinode-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-454000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:00:56.322598   10689 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:00:56.322753   10689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:56.322757   10689 out.go:358] Setting ErrFile to fd 2...
	I1205 11:00:56.322760   10689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:00:56.322909   10689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:00:56.324179   10689 out.go:352] Setting JSON to false
	I1205 11:00:56.342196   10689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5428,"bootTime":1733419828,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:00:56.342267   10689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:00:56.347104   10689 out.go:177] * [multinode-454000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:00:56.355074   10689 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:00:56.355116   10689 notify.go:220] Checking for updates...
	I1205 11:00:56.363035   10689 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:00:56.367033   10689 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:00:56.371072   10689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:00:56.374062   10689 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:00:56.377052   10689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:00:56.380382   10689 config.go:182] Loaded profile config "multinode-454000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:00:56.380661   10689 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:00:56.385079   10689 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:00:56.392042   10689 start.go:297] selected driver: qemu2
	I1205 11:00:56.392048   10689 start.go:901] validating driver "qemu2" against &{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:00:56.392104   10689 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:00:56.394770   10689 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:00:56.394795   10689 cni.go:84] Creating CNI manager for ""
	I1205 11:00:56.394815   10689 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 11:00:56.394876   10689 start.go:340] cluster config:
	{Name:multinode-454000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-454000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:00:56.399384   10689 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:00:56.408005   10689 out.go:177] * Starting "multinode-454000" primary control-plane node in "multinode-454000" cluster
	I1205 11:00:56.412019   10689 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:00:56.412037   10689 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:00:56.412053   10689 cache.go:56] Caching tarball of preloaded images
	I1205 11:00:56.412137   10689 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:00:56.412147   10689 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:00:56.412204   10689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/multinode-454000/config.json ...
	I1205 11:00:56.412703   10689 start.go:360] acquireMachinesLock for multinode-454000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:00:56.412763   10689 start.go:364] duration metric: took 53.709µs to acquireMachinesLock for "multinode-454000"
	I1205 11:00:56.412772   10689 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:00:56.412777   10689 fix.go:54] fixHost starting: 
	I1205 11:00:56.412900   10689 fix.go:112] recreateIfNeeded on multinode-454000: state=Stopped err=<nil>
	W1205 11:00:56.412908   10689 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:00:56.421038   10689 out.go:177] * Restarting existing qemu2 VM for "multinode-454000" ...
	I1205 11:00:56.424839   10689 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:00:56.424882   10689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1a:62:43:09:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 11:00:56.427213   10689 main.go:141] libmachine: STDOUT: 
	I1205 11:00:56.427234   10689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:00:56.427265   10689 fix.go:56] duration metric: took 14.487584ms for fixHost
	I1205 11:00:56.427270   10689 start.go:83] releasing machines lock for "multinode-454000", held for 14.50275ms
	W1205 11:00:56.427276   10689 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:00:56.427319   10689 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:00:56.427324   10689 start.go:729] Will try again in 5 seconds ...
	I1205 11:01:01.428985   10689 start.go:360] acquireMachinesLock for multinode-454000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:01:01.429522   10689 start.go:364] duration metric: took 414µs to acquireMachinesLock for "multinode-454000"
	I1205 11:01:01.429665   10689 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:01:01.429684   10689 fix.go:54] fixHost starting: 
	I1205 11:01:01.430447   10689 fix.go:112] recreateIfNeeded on multinode-454000: state=Stopped err=<nil>
	W1205 11:01:01.430479   10689 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:01:01.436168   10689 out.go:177] * Restarting existing qemu2 VM for "multinode-454000" ...
	I1205 11:01:01.443041   10689 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:01:01.443301   10689 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:1a:62:43:09:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/multinode-454000/disk.qcow2
	I1205 11:01:01.453559   10689 main.go:141] libmachine: STDOUT: 
	I1205 11:01:01.453610   10689 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:01:01.453685   10689 fix.go:56] duration metric: took 23.992875ms for fixHost
	I1205 11:01:01.453699   10689 start.go:83] releasing machines lock for "multinode-454000", held for 24.155125ms
	W1205 11:01:01.454239   10689 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-454000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-454000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:01:01.462017   10689 out.go:201] 
	W1205 11:01:01.466215   10689 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:01:01.466308   10689 out.go:270] * 
	* 
	W1205 11:01:01.468895   10689 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:01:01.474552   10689 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (72.573375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
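
The libmachine lines above show how the qemu2 driver actually starts the VM: qemu-system-aarch64 is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client so the socket-backed netdev (fd=3) gets plumbed into the host network. A trimmed Go sketch of that invocation pattern, keeping only the networking-relevant flags from the logged command line:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Wrapper path, socket path, and netdev flags come from the logged
		// command line; the remaining qemu arguments are omitted for brevity.
		cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-device", "virtio-net-pci,netdev=net0",
			"-netdev", "socket,id=net0,fd=3",
		)
		if out, err := cmd.CombinedOutput(); err != nil {
			// With no daemon on the socket, the wrapper itself fails and
			// "Connection refused" lands on STDERR, as in the log.
			fmt.Printf("%s: %v\n", out, err)
		}
	}

Because the wrapper exits before qemu ever runs, the driver sees plain "exit status 1" plus the refused-connection message, never a VM-level error.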

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-454000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-454000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-454000-m01 --driver=qemu2 : exit status 80 (9.801614542s)

                                                
                                                
-- stdout --
	* [multinode-454000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-454000-m01" primary control-plane node in "multinode-454000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-454000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-454000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-454000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-454000-m02 --driver=qemu2 : exit status 80 (10.099427584s)

                                                
                                                
-- stdout --
	* [multinode-454000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-454000-m02" primary control-plane node in "multinode-454000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-454000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-454000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-454000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-454000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-454000: exit status 83 (90.026375ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-454000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-454000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-454000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-454000 -n multinode-454000: exit status 7 (35.051667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-454000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.15s)
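
This test never reaches its real assertion: both throwaway profiles (-m01 and -m02) fail to boot for the usual socket_vmnet reason, and `node add` is rejected with exit 83 because the base cluster is stopped. The naming collision it is meant to exercise looks roughly like the check sketched below; the suffix pattern is an illustrative assumption, not minikube's actual validation code:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Hypothetical pattern: worker nodes of profile "multinode-454000"
		// are named with -mNN suffixes, so profiles named that way collide.
		nodeLike := regexp.MustCompile(`^multinode-454000-m\d{2}$`)
		for _, profile := range []string{"multinode-454000-m01", "multinode-454000-m02", "unrelated"} {
			fmt.Printf("%-22s collides: %v\n", profile, nodeLike.MatchString(profile))
		}
	}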

                                                
                                    
TestPreload (10.05s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-056000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-056000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.88870675s)

                                                
                                                
-- stdout --
	* [test-preload-056000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-056000" primary control-plane node in "test-preload-056000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-056000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:01:21.854190   10746 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:01:21.854364   10746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:01:21.854367   10746 out.go:358] Setting ErrFile to fd 2...
	I1205 11:01:21.854370   10746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:01:21.854500   10746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:01:21.855679   10746 out.go:352] Setting JSON to false
	I1205 11:01:21.873335   10746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5453,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:01:21.873418   10746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:01:21.880211   10746 out.go:177] * [test-preload-056000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:01:21.888074   10746 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:01:21.888124   10746 notify.go:220] Checking for updates...
	I1205 11:01:21.897082   10746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:01:21.900113   10746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:01:21.904109   10746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:01:21.907111   10746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:01:21.910094   10746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:01:21.913526   10746 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:01:21.913582   10746 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:01:21.918047   10746 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:01:21.925116   10746 start.go:297] selected driver: qemu2
	I1205 11:01:21.925123   10746 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:01:21.925131   10746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:01:21.927693   10746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:01:21.932012   10746 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:01:21.935124   10746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:01:21.935144   10746 cni.go:84] Creating CNI manager for ""
	I1205 11:01:21.935166   10746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:01:21.935170   10746 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:01:21.935204   10746 start.go:340] cluster config:
	{Name:test-preload-056000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:01:21.940254   10746 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.948083   10746 out.go:177] * Starting "test-preload-056000" primary control-plane node in "test-preload-056000" cluster
	I1205 11:01:21.952121   10746 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1205 11:01:21.952208   10746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/test-preload-056000/config.json ...
	I1205 11:01:21.952232   10746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/test-preload-056000/config.json: {Name:mkfecfe4d341be1c27f26ca89bb7274ecd1517e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:01:21.952234   10746 cache.go:107] acquiring lock: {Name:mk56ef6acfa7cd75b366303337f3c39cd8cd2884 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952233   10746 cache.go:107] acquiring lock: {Name:mk25e8524c7a11929b56e532b1f7fd5a0db79d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952242   10746 cache.go:107] acquiring lock: {Name:mkb71b030ceaab4ea93bc46b338ae109987e3d15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952263   10746 cache.go:107] acquiring lock: {Name:mk8482c459cb73a0904b4e1e171bced1a47e3bda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952266   10746 cache.go:107] acquiring lock: {Name:mkc90adba1e7bf6223a4f819ae6c26ced7e9ca7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952282   10746 cache.go:107] acquiring lock: {Name:mk5a07a040ba2059fc1862d6ef9e1ed45d0dbf63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952505   10746 cache.go:107] acquiring lock: {Name:mkf6beff0e15e8b41f4273d7602af811ff7e7b71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952519   10746 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 11:01:21.952543   10746 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:01:21.952614   10746 cache.go:107] acquiring lock: {Name:mk1ce616bd0738c1025fc6ff4a411e0852a0e0c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:01:21.952522   10746 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 11:01:21.952886   10746 start.go:360] acquireMachinesLock for test-preload-056000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:01:21.952928   10746 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 11:01:21.952990   10746 start.go:364] duration metric: took 97.875µs to acquireMachinesLock for "test-preload-056000"
	I1205 11:01:21.952999   10746 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 11:01:21.953004   10746 start.go:93] Provisioning new machine with config: &{Name:test-preload-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:01:21.953039   10746 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:01:21.953042   10746 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:01:21.953125   10746 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:01:21.953147   10746 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:01:21.957112   10746 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:01:21.962008   10746 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 11:01:21.962024   10746 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 11:01:21.962050   10746 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:01:21.962078   10746 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 11:01:21.962494   10746 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 11:01:21.962743   10746 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:01:21.962808   10746 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:01:21.962831   10746 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:01:21.975080   10746 start.go:159] libmachine.API.Create for "test-preload-056000" (driver="qemu2")
	I1205 11:01:21.975108   10746 client.go:168] LocalClient.Create starting
	I1205 11:01:21.975185   10746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:01:21.975222   10746 main.go:141] libmachine: Decoding PEM data...
	I1205 11:01:21.975235   10746 main.go:141] libmachine: Parsing certificate...
	I1205 11:01:21.975272   10746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:01:21.975302   10746 main.go:141] libmachine: Decoding PEM data...
	I1205 11:01:21.975311   10746 main.go:141] libmachine: Parsing certificate...
	I1205 11:01:21.975687   10746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:01:22.144464   10746 main.go:141] libmachine: Creating SSH key...
	I1205 11:01:22.183872   10746 main.go:141] libmachine: Creating Disk image...
	I1205 11:01:22.183890   10746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:01:22.184119   10746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2
	I1205 11:01:22.193623   10746 main.go:141] libmachine: STDOUT: 
	I1205 11:01:22.193647   10746 main.go:141] libmachine: STDERR: 
	I1205 11:01:22.193717   10746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2 +20000M
	I1205 11:01:22.202952   10746 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:01:22.202977   10746 main.go:141] libmachine: STDERR: 
	I1205 11:01:22.202994   10746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2
	I1205 11:01:22.203002   10746 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:01:22.203016   10746 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:01:22.203064   10746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:ec:5b:12:dd:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2
	I1205 11:01:22.205730   10746 main.go:141] libmachine: STDOUT: 
	I1205 11:01:22.205750   10746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:01:22.205769   10746 client.go:171] duration metric: took 230.660041ms to LocalClient.Create
	I1205 11:01:22.458729   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1205 11:01:22.468405   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W1205 11:01:22.476753   10746 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:01:22.476772   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:01:22.515268   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1205 11:01:22.597882   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1205 11:01:22.640570   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1205 11:01:22.714644   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:01:22.851622   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1205 11:01:22.851682   10746 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 899.12075ms
	I1205 11:01:22.851724   10746 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1205 11:01:23.307717   10746 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:01:23.307839   10746 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 11:01:23.785979   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 11:01:23.786032   10746 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.8338165s
	I1205 11:01:23.786063   10746 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 11:01:24.205967   10746 start.go:128] duration metric: took 2.252927833s to createHost
	I1205 11:01:24.206022   10746 start.go:83] releasing machines lock for "test-preload-056000", held for 2.253042208s
	W1205 11:01:24.206102   10746 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:01:24.224409   10746 out.go:177] * Deleting "test-preload-056000" in qemu2 ...
	W1205 11:01:24.257702   10746 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:01:24.257734   10746 start.go:729] Will try again in 5 seconds ...
	I1205 11:01:24.381490   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1205 11:01:24.381535   10746 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.42907775s
	I1205 11:01:24.381563   10746 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1205 11:01:24.918058   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1205 11:01:24.918074   10746 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.96587225s
	I1205 11:01:24.918083   10746 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1205 11:01:26.819296   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1205 11:01:26.819344   10746 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.867160208s
	I1205 11:01:26.819368   10746 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1205 11:01:27.385669   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1205 11:01:27.385722   10746 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.433506333s
	I1205 11:01:27.385746   10746 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1205 11:01:29.258165   10746 start.go:360] acquireMachinesLock for test-preload-056000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:01:29.258755   10746 start.go:364] duration metric: took 507.625µs to acquireMachinesLock for "test-preload-056000"
	I1205 11:01:29.258883   10746 start.go:93] Provisioning new machine with config: &{Name:test-preload-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-056000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:01:29.259121   10746 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:01:29.270836   10746 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:01:29.318823   10746 start.go:159] libmachine.API.Create for "test-preload-056000" (driver="qemu2")
	I1205 11:01:29.318890   10746 client.go:168] LocalClient.Create starting
	I1205 11:01:29.319034   10746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:01:29.319124   10746 main.go:141] libmachine: Decoding PEM data...
	I1205 11:01:29.319149   10746 main.go:141] libmachine: Parsing certificate...
	I1205 11:01:29.319210   10746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:01:29.319273   10746 main.go:141] libmachine: Decoding PEM data...
	I1205 11:01:29.319286   10746 main.go:141] libmachine: Parsing certificate...
	I1205 11:01:29.319847   10746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:01:29.496679   10746 main.go:141] libmachine: Creating SSH key...
	I1205 11:01:29.642044   10746 main.go:141] libmachine: Creating Disk image...
	I1205 11:01:29.642055   10746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:01:29.642272   10746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2
	I1205 11:01:29.652554   10746 main.go:141] libmachine: STDOUT: 
	I1205 11:01:29.652575   10746 main.go:141] libmachine: STDERR: 
	I1205 11:01:29.652646   10746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2 +20000M
	I1205 11:01:29.661338   10746 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:01:29.661354   10746 main.go:141] libmachine: STDERR: 
	I1205 11:01:29.661370   10746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2
	I1205 11:01:29.661377   10746 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:01:29.661389   10746 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:01:29.661429   10746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:b5:7b:0d:91:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/test-preload-056000/disk.qcow2
	I1205 11:01:29.663285   10746 main.go:141] libmachine: STDOUT: 
	I1205 11:01:29.663310   10746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:01:29.663325   10746 client.go:171] duration metric: took 344.434209ms to LocalClient.Create
	I1205 11:01:30.129514   10746 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1205 11:01:30.129604   10746 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 8.177421708s
	I1205 11:01:30.129632   10746 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1205 11:01:31.663549   10746 start.go:128] duration metric: took 2.404415125s to createHost
	I1205 11:01:31.663598   10746 start.go:83] releasing machines lock for "test-preload-056000", held for 2.404841334s
	W1205 11:01:31.663929   10746 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-056000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-056000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:01:31.677077   10746 out.go:201] 
	W1205 11:01:31.680698   10746 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:01:31.680750   10746 out.go:270] * 
	* 
	W1205 11:01:31.683662   10746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:01:31.692575   10746 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-056000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-05 11:01:31.712271 -0800 PST m=+696.405740918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-056000 -n test-preload-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-056000 -n test-preload-056000: exit status 7 (71.626708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-056000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-056000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-056000
--- FAIL: TestPreload (10.05s)
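Before the network failure, the trace above records the machine layer's disk pipeline: qemu-img convert rewrites the raw boot image as qcow2, then qemu-img resize grows it by +20000M. A sketch of the same two steps from Go via os/exec, assuming qemu-img is on PATH and using placeholder paths instead of the CI profile paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk runs the two qemu-img steps the log records for disk.qcow2;
	// raw and qcow2 are placeholder paths, not the CI machine paths.
	func createDisk(raw, qcow2 string) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
			fmt.Println(err)
		}
	}

Both steps succeed in the trace ("Image resized.", "DONE writing"); it is only the subsequent socket_vmnet_client launch that fails.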

                                                
                                    
TestScheduledStopUnix (10.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-127000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-127000 --memory=2048 --driver=qemu2 : exit status 80 (9.978766292s)

                                                
                                                
-- stdout --
	* [scheduled-stop-127000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-127000" primary control-plane node in "scheduled-stop-127000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-127000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-127000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-127000" primary control-plane node in "scheduled-stop-127000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-127000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-127000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-05 11:01:41.849553 -0800 PST m=+706.543130085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-127000 -n scheduled-stop-127000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-127000 -n scheduled-stop-127000: exit status 7 (73.846834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-127000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-127000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-127000
--- FAIL: TestScheduledStopUnix (10.14s)
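Every start attempt in this report fails the same way: QEMU is launched through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the guest network backend never comes up. Dialing the unix socket directly confirms whether the daemon is listening; a minimal sketch, assuming the default socket path shown in the cluster config:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same socket the failing socket_vmnet_client invocations target.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Mirrors the log's "Connection refused": nothing is listening.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this prints "unreachable", restarting socket_vmnet on the host (however it is managed there) should clear the GUEST_PROVISION failures above.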

                                                
                                    
TestSkaffold (12.25s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3755839597 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3755839597 version: (1.019963667s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-443000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-443000 --memory=2600 --driver=qemu2 : exit status 80 (9.7731365s)

                                                
                                                
-- stdout --
	* [skaffold-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-443000" primary control-plane node in "skaffold-443000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-443000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-443000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-443000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-443000" primary control-plane node in "skaffold-443000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-443000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-443000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-05 11:01:54.10264 -0800 PST m=+718.796346626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-443000 -n skaffold-443000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-443000 -n skaffold-443000: exit status 7 (68.694333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-443000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-443000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-443000
--- FAIL: TestSkaffold (12.25s)
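The exit codes carry the diagnosis throughout this report: status 80 accompanies the GUEST_PROVISION errors, status 83 the "host is not running" advisory, and status 7 is how the status command encodes a stopped host, which helpers_test.go explicitly treats as "may be ok". A sketch of reading such codes from a child process in Go (the binary path mirrors the one under test; any command works):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// The report drives out/minikube-darwin-arm64; substitute any binary.
		cmd := exec.Command("out/minikube-darwin-arm64", "status", "-p", "skaffold-443000")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// status encodes host state in the exit code, so a non-zero
			// exit (7 for a stopped host above) is not necessarily fatal.
			fmt.Println("exit code:", ee.ExitCode())
		} else if err != nil {
			fmt.Println("could not run:", err) // e.g. binary not found
		}
	}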

                                                
                                    
TestRunningBinaryUpgrade (601.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2919436991 start -p running-upgrade-829000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2919436991 start -p running-upgrade-829000 --memory=2200 --vm-driver=qemu2 : (53.563799792s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-829000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-829000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.913350417s)

                                                
                                                
-- stdout --
	* [running-upgrade-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-829000" primary control-plane node in "running-upgrade-829000" cluster
	* Updating the running qemu2 "running-upgrade-829000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:03:29.573702   11137 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:03:29.574053   11137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:03:29.574056   11137 out.go:358] Setting ErrFile to fd 2...
	I1205 11:03:29.574059   11137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:03:29.574182   11137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:03:29.575258   11137 out.go:352] Setting JSON to false
	I1205 11:03:29.594678   11137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5581,"bootTime":1733419828,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:03:29.594749   11137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:03:29.598561   11137 out.go:177] * [running-upgrade-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:03:29.606594   11137 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:03:29.606638   11137 notify.go:220] Checking for updates...
	I1205 11:03:29.617529   11137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:03:29.621512   11137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:03:29.624546   11137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:03:29.627547   11137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:03:29.634499   11137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:03:29.637853   11137 config.go:182] Loaded profile config "running-upgrade-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:03:29.641495   11137 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:03:29.644519   11137 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:03:29.647508   11137 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:03:29.654542   11137 start.go:297] selected driver: qemu2
	I1205 11:03:29.654549   11137 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:03:29.654604   11137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:03:29.657526   11137 cni.go:84] Creating CNI manager for ""
	I1205 11:03:29.657556   11137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:03:29.657583   11137 start.go:340] cluster config:
	{Name:running-upgrade-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:03:29.657635   11137 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:03:29.666506   11137 out.go:177] * Starting "running-upgrade-829000" primary control-plane node in "running-upgrade-829000" cluster
	I1205 11:03:29.670334   11137 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:03:29.670348   11137 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1205 11:03:29.670357   11137 cache.go:56] Caching tarball of preloaded images
	I1205 11:03:29.670412   11137 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:03:29.670417   11137 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
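
	The two preload lines above show minikube's cache-first strategy: stat the local tarball and skip the download entirely when it is already present. A minimal Go sketch of the same check, assuming a hypothetical preloadPath and leaving the download path as a stub:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    )

	    // preloadPath is a hypothetical cache location mirroring the layout in the log.
	    const preloadPath = ".minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"

	    func main() {
	    	// Stat the cached tarball; only fall back to a download when it is missing.
	    	if _, err := os.Stat(preloadPath); err == nil {
	    		fmt.Println("found local preload, skipping download")
	    		return
	    	} else if !os.IsNotExist(err) {
	    		fmt.Println("unexpected stat error:", err)
	    		return
	    	}
	    	fmt.Println("no cached preload; a real implementation would download it here")
	    }
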
	I1205 11:03:29.670463   11137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/config.json ...
	I1205 11:03:29.670854   11137 start.go:360] acquireMachinesLock for running-upgrade-829000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:03:29.670887   11137 start.go:364] duration metric: took 26.5µs to acquireMachinesLock for "running-upgrade-829000"
	I1205 11:03:29.670896   11137 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:03:29.670902   11137 fix.go:54] fixHost starting: 
	I1205 11:03:29.671488   11137 fix.go:112] recreateIfNeeded on running-upgrade-829000: state=Running err=<nil>
	W1205 11:03:29.671497   11137 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:03:29.675511   11137 out.go:177] * Updating the running qemu2 "running-upgrade-829000" VM ...
	I1205 11:03:29.682533   11137 machine.go:93] provisionDockerMachine start ...
	I1205 11:03:29.682618   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:29.682791   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:29.682796   11137 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 11:03:29.740325   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-829000
	
	I1205 11:03:29.740340   11137 buildroot.go:166] provisioning hostname "running-upgrade-829000"
	I1205 11:03:29.740415   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:29.740529   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:29.740538   11137 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-829000 && echo "running-upgrade-829000" | sudo tee /etc/hostname
	I1205 11:03:29.798554   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-829000
	
	I1205 11:03:29.798619   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:29.798725   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:29.798733   11137 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-829000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-829000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-829000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 11:03:29.858899   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
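
	The hostname exchange above is the provisioner's general pattern: dial a forwarded localhost port, run a short idempotent shell script, and log the combined output. A minimal sketch of that pattern with golang.org/x/crypto/ssh, where the key path and port stand in for the values seen in the log:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	// Load the machine's private key (path is a stand-in for the id_rsa in the log).
	    	keyBytes, err := os.ReadFile("/path/to/machines/id_rsa")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(keyBytes)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	cfg := &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM on localhost; never do this in production
	    	}
	    	// The log dials a forwarded port on localhost (51743 for this VM).
	    	client, err := ssh.Dial("tcp", "localhost:51743", cfg)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer client.Close()

	    	session, err := client.NewSession()
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer session.Close()

	    	// Run the same idempotent hostname command the provisioner uses.
	    	out, err := session.CombinedOutput(`sudo hostname running-upgrade-829000 && echo "running-upgrade-829000" | sudo tee /etc/hostname`)
	    	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	    }
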
	I1205 11:03:29.858912   11137 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20052-8600/.minikube CaCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20052-8600/.minikube}
	I1205 11:03:29.858921   11137 buildroot.go:174] setting up certificates
	I1205 11:03:29.858926   11137 provision.go:84] configureAuth start
	I1205 11:03:29.858931   11137 provision.go:143] copyHostCerts
	I1205 11:03:29.858999   11137 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem, removing ...
	I1205 11:03:29.859019   11137 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem
	I1205 11:03:29.859167   11137 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem (1082 bytes)
	I1205 11:03:29.859341   11137 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem, removing ...
	I1205 11:03:29.859344   11137 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem
	I1205 11:03:29.859385   11137 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem (1123 bytes)
	I1205 11:03:29.859500   11137 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem, removing ...
	I1205 11:03:29.859503   11137 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem
	I1205 11:03:29.859543   11137 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem (1679 bytes)
	I1205 11:03:29.859640   11137 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-829000 san=[127.0.0.1 localhost minikube running-upgrade-829000]
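
	The server cert generated above is signed by the local minikube CA and carries SANs for every name the Docker daemon may be dialed by (127.0.0.1, localhost, minikube, the machine name). A compact sketch of that issuance with crypto/x509; for brevity it self-signs a throwaway CA in-process, whereas the real flow loads ca.pem / ca-key.pem from disk:

	    package main

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"log"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// Throwaway CA key/cert (stand-in for the persisted minikubeCA material).
	    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	caTmpl := &x509.Certificate{
	    		SerialNumber:          big.NewInt(1),
	    		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
	    		NotBefore:             time.Now(),
	    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
	    		IsCA:                  true,
	    		KeyUsage:              x509.KeyUsageCertSign,
	    		BasicConstraintsValid: true,
	    	}
	    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	caCert, _ := x509.ParseCertificate(caDER)

	    	// Server certificate carrying the SAN set from the log line above.
	    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	    	srvTmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(2),
	    		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-829000"}},
	    		DNSNames:     []string{"localhost", "minikube", "running-upgrade-829000"},
	    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour),
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
	    		log.Fatal(err)
	    	}
	    	log.Println("server cert issued for", srvTmpl.DNSNames, srvTmpl.IPAddresses)
	    }
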
	I1205 11:03:29.993347   11137 provision.go:177] copyRemoteCerts
	I1205 11:03:29.993406   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 11:03:29.993414   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:03:30.024187   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 11:03:30.033153   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 11:03:30.039885   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 11:03:30.046306   11137 provision.go:87] duration metric: took 187.377208ms to configureAuth
	I1205 11:03:30.046316   11137 buildroot.go:189] setting minikube options for container-runtime
	I1205 11:03:30.046422   11137 config.go:182] Loaded profile config "running-upgrade-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:03:30.046465   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:30.046557   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:30.046562   11137 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 11:03:30.101837   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1205 11:03:30.101850   11137 buildroot.go:70] root file system type: tmpfs
	I1205 11:03:30.101902   11137 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 11:03:30.101974   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:30.102080   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:30.102114   11137 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 11:03:30.163974   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 11:03:30.164039   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:30.164153   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:30.164163   11137 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 11:03:30.223500   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: 
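
	The one-liner above (`diff -u old new || { mv ...; systemctl restart ...; }`) is a write-if-changed guard: docker is only restarted when the rendered unit actually differs from what is on disk. The same guard expressed locally in Go, with the paths and service name as stand-ins (the systemctl calls need root on a systemd host):

	    package main

	    import (
	    	"bytes"
	    	"fmt"
	    	"log"
	    	"os"
	    	"os/exec"
	    )

	    // applyIfChanged writes newUnit to path and reloads/restarts the service only
	    // when the content differs, mirroring the diff-then-mv guard in the log.
	    func applyIfChanged(path string, newUnit []byte, service string) error {
	    	old, err := os.ReadFile(path)
	    	if err == nil && bytes.Equal(old, newUnit) {
	    		return nil // unchanged: skip the disruptive restart
	    	}
	    	if err := os.WriteFile(path, newUnit, 0o644); err != nil {
	    		return err
	    	}
	    	for _, args := range [][]string{
	    		{"daemon-reload"},
	    		{"enable", service},
	    		{"restart", service},
	    	} {
	    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
	    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	    		}
	    	}
	    	return nil
	    }

	    func main() {
	    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	    	// Demo against a temp path; the real target is /lib/systemd/system/docker.service.
	    	if err := applyIfChanged(os.TempDir()+"/docker.service.new", unit, "docker"); err != nil {
	    		log.Println(err)
	    	}
	    }
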
	I1205 11:03:30.223512   11137 machine.go:96] duration metric: took 540.978125ms to provisionDockerMachine
	I1205 11:03:30.223518   11137 start.go:293] postStartSetup for "running-upgrade-829000" (driver="qemu2")
	I1205 11:03:30.223524   11137 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 11:03:30.223592   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 11:03:30.223601   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:03:30.254806   11137 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 11:03:30.256026   11137 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 11:03:30.256035   11137 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20052-8600/.minikube/addons for local assets ...
	I1205 11:03:30.256107   11137 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20052-8600/.minikube/files for local assets ...
	I1205 11:03:30.256201   11137 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem -> 91362.pem in /etc/ssl/certs
	I1205 11:03:30.256302   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 11:03:30.259078   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem --> /etc/ssl/certs/91362.pem (1708 bytes)
	I1205 11:03:30.266266   11137 start.go:296] duration metric: took 42.743083ms for postStartSetup
	I1205 11:03:30.266280   11137 fix.go:56] duration metric: took 595.385875ms for fixHost
	I1205 11:03:30.266324   11137 main.go:141] libmachine: Using SSH client type: native
	I1205 11:03:30.266426   11137 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10073afc0] 0x10073d800 <nil>  [] 0s} localhost 51743 <nil> <nil>}
	I1205 11:03:30.266432   11137 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 11:03:30.322397   11137 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733425410.478704972
	
	I1205 11:03:30.322407   11137 fix.go:216] guest clock: 1733425410.478704972
	I1205 11:03:30.322410   11137 fix.go:229] Guest: 2024-12-05 11:03:30.478704972 -0800 PST Remote: 2024-12-05 11:03:30.266281 -0800 PST m=+0.714905376 (delta=212.423972ms)
	I1205 11:03:30.322421   11137 fix.go:200] guest clock delta is within tolerance: 212.423972ms
	I1205 11:03:30.322423   11137 start.go:83] releasing machines lock for "running-upgrade-829000", held for 651.538166ms
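
	The clock check above reads `date +%s.%N` from the guest, compares it to host time, and only forces a resync when the drift exceeds tolerance. A sketch that parses the same output format; the one-second tolerance below is an assumption for illustration:

	    package main

	    import (
	    	"fmt"
	    	"strconv"
	    	"strings"
	    	"time"
	    )

	    // parseGuestClock converts `date +%s.%N` output into a time.Time.
	    func parseGuestClock(s string) (time.Time, error) {
	    	sec, frac, ok := strings.Cut(strings.TrimSpace(s), ".")
	    	if !ok {
	    		return time.Time{}, fmt.Errorf("unexpected clock format %q", s)
	    	}
	    	secs, err := strconv.ParseInt(sec, 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	nsecs, err := strconv.ParseInt(frac, 10, 64)
	    	if err != nil {
	    		return time.Time{}, err
	    	}
	    	return time.Unix(secs, nsecs), nil
	    }

	    func main() {
	    	guest, err := parseGuestClock("1733425410.478704972") // value from the log
	    	if err != nil {
	    		panic(err)
	    	}
	    	delta := time.Since(guest)
	    	if delta < 0 {
	    		delta = -delta
	    	}
	    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < time.Second)
	    }
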
	I1205 11:03:30.322503   11137 ssh_runner.go:195] Run: cat /version.json
	I1205 11:03:30.322513   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:03:30.322503   11137 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 11:03:30.322569   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	W1205 11:03:30.323035   11137 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51881->127.0.0.1:51743: read: connection reset by peer
	I1205 11:03:30.323053   11137 retry.go:31] will retry after 349.988371ms: ssh: handshake failed: read tcp 127.0.0.1:51881->127.0.0.1:51743: read: connection reset by peer
	W1205 11:03:30.351518   11137 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 11:03:30.351581   11137 ssh_runner.go:195] Run: systemctl --version
	I1205 11:03:30.353468   11137 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 11:03:30.355215   11137 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 11:03:30.355252   11137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1205 11:03:30.358535   11137 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1205 11:03:30.363098   11137 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 11:03:30.363106   11137 start.go:495] detecting cgroup driver to use...
	I1205 11:03:30.363205   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:03:30.368520   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1205 11:03:30.371306   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 11:03:30.374672   11137 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 11:03:30.374705   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 11:03:30.377600   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:03:30.380642   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 11:03:30.383594   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:03:30.386702   11137 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 11:03:30.389992   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 11:03:30.393265   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 11:03:30.396084   11137 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 11:03:30.398952   11137 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 11:03:30.401847   11137 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 11:03:30.404374   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:03:30.494159   11137 ssh_runner.go:195] Run: sudo systemctl restart containerd
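
	The sed runs above flip containerd to the cgroupfs driver by rewriting the `SystemdCgroup = ...` line in /etc/containerd/config.toml before the restart. The same rewrite as a multiline regexp in Go, operating on an in-memory sample of the config:

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    )

	    var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

	    func main() {
	    	config := []byte(`[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = true
	    `)
	    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	    	out := systemdCgroup.ReplaceAll(config, []byte("${1}SystemdCgroup = false"))
	    	fmt.Print(string(out))
	    }
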
	I1205 11:03:30.506769   11137 start.go:495] detecting cgroup driver to use...
	I1205 11:03:30.506850   11137 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 11:03:30.513055   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:03:30.518879   11137 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 11:03:30.525939   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:03:30.531343   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:03:30.537035   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:03:30.541962   11137 ssh_runner.go:195] Run: which cri-dockerd
	I1205 11:03:30.543399   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 11:03:30.546568   11137 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1205 11:03:30.552064   11137 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 11:03:30.646946   11137 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 11:03:30.738542   11137 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 11:03:30.738606   11137 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 11:03:30.744293   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:03:30.843482   11137 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:03:43.616078   11137 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.772714875s)
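
	docker.go:574 above writes a 130-byte /etc/docker/daemon.json to pin Docker itself to the cgroupfs driver before this restart. The payload is not echoed in the log, so the JSON built below is an assumption modeled on minikube's usual shape; only the exec-opts entry is implied by the "configuring docker to use cgroupfs" line:

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    )

	    func main() {
	    	// Assumed daemon.json contents; only exec-opts is implied by the log.
	    	cfg := map[string]any{
	    		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
	    		"log-driver":     "json-file",
	    		"log-opts":       map[string]string{"max-size": "100m"},
	    		"storage-driver": "overlay2",
	    	}
	    	b, err := json.MarshalIndent(cfg, "", "  ")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json before the restart
	    }
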
	I1205 11:03:43.616161   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 11:03:43.621859   11137 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 11:03:43.630531   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:03:43.637154   11137 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 11:03:43.722450   11137 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 11:03:43.811678   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:03:43.888343   11137 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 11:03:43.894753   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:03:43.900224   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:03:43.982325   11137 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 11:03:44.027029   11137 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 11:03:44.027133   11137 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 11:03:44.030175   11137 start.go:563] Will wait 60s for crictl version
	I1205 11:03:44.030240   11137 ssh_runner.go:195] Run: which crictl
	I1205 11:03:44.031795   11137 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 11:03:44.043551   11137 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1205 11:03:44.043641   11137 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:03:44.056328   11137 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:03:44.076487   11137 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1205 11:03:44.076659   11137 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1205 11:03:44.078016   11137 kubeadm.go:883] updating cluster {Name:running-upgrade-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1205 11:03:44.078059   11137 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:03:44.078117   11137 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:03:44.088849   11137 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:03:44.088859   11137 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:03:44.088918   11137 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:03:44.092067   11137 ssh_runner.go:195] Run: which lz4
	I1205 11:03:44.093323   11137 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 11:03:44.094513   11137 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 11:03:44.094524   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1205 11:03:45.034528   11137 docker.go:653] duration metric: took 941.254208ms to copy over tarball
	I1205 11:03:45.034609   11137 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 11:03:46.255499   11137 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.220889458s)
	I1205 11:03:46.255513   11137 ssh_runner.go:146] rm: /preloaded.tar.lz4
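
	The preload ships as a tar.lz4 and is unpacked into /var with the security.capability xattrs preserved, then removed. A Go equivalent of the extraction step using the github.com/pierrec/lz4/v4 streaming reader with archive/tar; it only lists entries to stay side-effect free, whereas a real restore would also write file bodies and re-apply the xattrs:

	    package main

	    import (
	    	"archive/tar"
	    	"fmt"
	    	"io"
	    	"log"
	    	"os"

	    	"github.com/pierrec/lz4/v4"
	    )

	    func main() {
	    	f, err := os.Open("preloaded-images.tar.lz4") // stand-in for /preloaded.tar.lz4
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer f.Close()

	    	// Stream-decompress, then walk the tar headers.
	    	tr := tar.NewReader(lz4.NewReader(f))
	    	for {
	    		hdr, err := tr.Next()
	    		if err == io.EOF {
	    			break
	    		}
	    		if err != nil {
	    			log.Fatal(err)
	    		}
	    		fmt.Println(hdr.Name, hdr.Size)
	    	}
	    }
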
	I1205 11:03:46.272186   11137 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:03:46.275705   11137 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1205 11:03:46.281137   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:03:46.365586   11137 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:03:47.575525   11137 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.209933958s)
	I1205 11:03:47.575635   11137 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:03:47.586488   11137 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:03:47.586496   11137 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:03:47.586502   11137 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 11:03:47.592315   11137 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:03:47.594025   11137 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:03:47.595714   11137 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:03:47.595838   11137 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:03:47.597291   11137 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:03:47.597326   11137 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:03:47.598925   11137 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:03:47.599271   11137 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:03:47.600152   11137 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:03:47.600165   11137 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:03:47.601442   11137 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:03:47.601720   11137 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:03:47.602409   11137 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:03:47.602496   11137 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:03:47.603623   11137 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:03:47.604201   11137 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:03:48.139958   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:03:48.152231   11137 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1205 11:03:48.152296   11137 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:03:48.152484   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:03:48.163480   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1205 11:03:48.181146   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:03:48.192725   11137 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1205 11:03:48.192755   11137 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:03:48.192823   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:03:48.203988   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1205 11:03:48.239156   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:03:48.253592   11137 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1205 11:03:48.253620   11137 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:03:48.253670   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:03:48.265120   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1205 11:03:48.275737   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 11:03:48.285437   11137 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1205 11:03:48.285461   11137 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:03:48.285516   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1205 11:03:48.295413   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1205 11:03:48.347171   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:03:48.359029   11137 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1205 11:03:48.359050   11137 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:03:48.359134   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:03:48.369721   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1205 11:03:48.376358   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 11:03:48.386033   11137 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1205 11:03:48.386056   11137 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1205 11:03:48.386117   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1205 11:03:48.395857   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:03:48.395999   11137 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 11:03:48.397821   11137 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1205 11:03:48.397832   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1205 11:03:48.405796   11137 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 11:03:48.405805   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1205 11:03:48.438791   11137 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
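
	Each cached image above is streamed into the daemon with `sudo cat <file> | docker load`. The same pattern from Go wires the tarball straight to the subprocess stdin instead of shelling out to cat:

	    package main

	    import (
	    	"log"
	    	"os"
	    	"os/exec"
	    )

	    func main() {
	    	// Stand-in for /var/lib/minikube/images/pause_3.7.
	    	f, err := os.Open("pause_3.7")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer f.Close()

	    	cmd := exec.Command("docker", "load")
	    	cmd.Stdin = f // equivalent of: cat pause_3.7 | docker load
	    	out, err := cmd.CombinedOutput()
	    	if err != nil {
	    		log.Fatalf("docker load: %v: %s", err, out)
	    	}
	    	log.Printf("loaded: %s", out)
	    }
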
	W1205 11:03:48.451210   11137 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:03:48.451361   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:03:48.461117   11137 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1205 11:03:48.461142   11137 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:03:48.461205   11137 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:03:48.470998   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:03:48.471140   11137 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:03:48.472661   11137 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1205 11:03:48.472672   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1205 11:03:48.498456   11137 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:03:48.498591   11137 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:03:48.514665   11137 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:03:48.514679   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1205 11:03:48.521678   11137 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 11:03:48.521709   11137 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:03:48.521780   11137 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:03:48.570901   11137 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1205 11:03:49.117834   11137 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 11:03:49.118390   11137 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:03:49.124331   11137 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 11:03:49.124391   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 11:03:49.180363   11137 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:03:49.180377   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 11:03:49.409309   11137 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 11:03:49.409353   11137 cache_images.go:92] duration metric: took 1.822862625s to LoadCachedImages
	W1205 11:03:49.409396   11137 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I1205 11:03:49.409402   11137 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1205 11:03:49.409466   11137 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-829000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 11:03:49.409549   11137 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 11:03:49.423038   11137 cni.go:84] Creating CNI manager for ""
	I1205 11:03:49.423054   11137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:03:49.423074   11137 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 11:03:49.423086   11137 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-829000 NodeName:running-upgrade-829000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 11:03:49.423170   11137 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-829000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 11:03:49.423242   11137 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1205 11:03:49.426432   11137 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 11:03:49.426472   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 11:03:49.429742   11137 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 11:03:49.434937   11137 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 11:03:49.440100   11137 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1205 11:03:49.445185   11137 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1205 11:03:49.446436   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:03:49.527042   11137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:03:49.533006   11137 certs.go:68] Setting up /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000 for IP: 10.0.2.15
	I1205 11:03:49.533013   11137 certs.go:194] generating shared ca certs ...
	I1205 11:03:49.533021   11137 certs.go:226] acquiring lock for ca certs: {Name:mk120c2a781c4636bd95493f524c24b1dcf3780a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:03:49.533280   11137 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.key
	I1205 11:03:49.533317   11137 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.key
	I1205 11:03:49.533323   11137 certs.go:256] generating profile certs ...
	I1205 11:03:49.533392   11137 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.key
	I1205 11:03:49.533406   11137 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.key.4b048635
	I1205 11:03:49.533415   11137 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.crt.4b048635 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
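
	Note the first SAN, 10.96.0.1: it is the first usable address of the 10.96.0.0/12 ServiceCIDR, i.e. the in-cluster `kubernetes` Service ClusterIP that the apiserver certificate must cover. A sketch of that derivation:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"net"
	    )

	    // firstServiceIP returns the conventional apiserver ClusterIP: the network
	    // address of the service CIDR plus one.
	    func firstServiceIP(cidr string) (net.IP, error) {
	    	_, ipnet, err := net.ParseCIDR(cidr)
	    	if err != nil {
	    		return nil, err
	    	}
	    	ip := ipnet.IP.To4()
	    	if ip == nil {
	    		return nil, fmt.Errorf("IPv4 CIDR expected, got %s", cidr)
	    	}
	    	out := make(net.IP, len(ip))
	    	copy(out, ip)
	    	out[3]++ // 10.96.0.0 -> 10.96.0.1; safe here since the base address ends in .0
	    	return out, nil
	    }

	    func main() {
	    	ip, err := firstServiceIP("10.96.0.0/12")
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	fmt.Println(ip) // 10.96.0.1
	    }
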
	I1205 11:03:49.642016   11137 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.crt.4b048635 ...
	I1205 11:03:49.642028   11137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.crt.4b048635: {Name:mk88beef2fc5bc0605e096bbdf419dff205b2f23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:03:49.642313   11137 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.key.4b048635 ...
	I1205 11:03:49.642318   11137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.key.4b048635: {Name:mk57495bebc12e7f160f15e5c5ccade8d384f432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:03:49.642479   11137 certs.go:381] copying /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.crt.4b048635 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.crt
	I1205 11:03:49.642597   11137 certs.go:385] copying /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.key.4b048635 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.key
	I1205 11:03:49.642729   11137 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/proxy-client.key
	I1205 11:03:49.642867   11137 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136.pem (1338 bytes)
	W1205 11:03:49.642889   11137 certs.go:480] ignoring /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136_empty.pem, impossibly tiny 0 bytes
	I1205 11:03:49.642894   11137 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 11:03:49.642915   11137 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem (1082 bytes)
	I1205 11:03:49.642938   11137 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem (1123 bytes)
	I1205 11:03:49.642958   11137 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem (1679 bytes)
	I1205 11:03:49.643004   11137 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem (1708 bytes)
	I1205 11:03:49.643467   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 11:03:49.650832   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 11:03:49.657771   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 11:03:49.665549   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 11:03:49.672865   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 11:03:49.679309   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 11:03:49.686301   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 11:03:49.694225   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 11:03:49.701945   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136.pem --> /usr/share/ca-certificates/9136.pem (1338 bytes)
	I1205 11:03:49.709521   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem --> /usr/share/ca-certificates/91362.pem (1708 bytes)
	I1205 11:03:49.716787   11137 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 11:03:49.723492   11137 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 11:03:49.728490   11137 ssh_runner.go:195] Run: openssl version
	I1205 11:03:49.730612   11137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 11:03:49.734276   11137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:03:49.736052   11137 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:03:49.736079   11137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:03:49.737991   11137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 11:03:49.741048   11137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9136.pem && ln -fs /usr/share/ca-certificates/9136.pem /etc/ssl/certs/9136.pem"
	I1205 11:03:49.744190   11137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9136.pem
	I1205 11:03:49.745691   11137 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 18:50 /usr/share/ca-certificates/9136.pem
	I1205 11:03:49.745722   11137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9136.pem
	I1205 11:03:49.747722   11137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9136.pem /etc/ssl/certs/51391683.0"
	I1205 11:03:49.750595   11137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91362.pem && ln -fs /usr/share/ca-certificates/91362.pem /etc/ssl/certs/91362.pem"
	I1205 11:03:49.754210   11137 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91362.pem
	I1205 11:03:49.755806   11137 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 18:50 /usr/share/ca-certificates/91362.pem
	I1205 11:03:49.755831   11137 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91362.pem
	I1205 11:03:49.757659   11137 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91362.pem /etc/ssl/certs/3ec20f2e.0"
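
	The three test/hash/ln blocks above follow OpenSSL's hashed trust-store layout: each CA file under /usr/share/ca-certificates gets a symlink /etc/ssl/certs/<subject-hash>.0, where the hash (b5213941, 51391683, 3ec20f2e) is exactly what "openssl x509 -hash -noout" prints. One such step in Go (a sketch; the helper name is invented):

	    package sketch

	    import (
	        "os/exec"
	        "strings"
	    )

	    // linkBySubjectHash recreates /etc/ssl/certs/<hash>.0 for one CA file.
	    func linkBySubjectHash(pemPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	        // ln -f replaces a stale link, -s makes it symbolic
	        return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
	    }
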
	I1205 11:03:49.760605   11137 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 11:03:49.762337   11137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 11:03:49.764385   11137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 11:03:49.766122   11137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 11:03:49.768145   11137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 11:03:49.770109   11137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 11:03:49.771994   11137 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
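
	Each check above runs "openssl x509 -checkend 86400", which exits non-zero when the certificate expires within the next 86400 seconds (24 hours), signalling that it should be regenerated. The same test in pure Go via crypto/x509 (a sketch; validFor is an invented name):

	    package sketch

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "errors"
	        "time"
	    )

	    // validFor reports whether a PEM certificate is still valid d from now,
	    // i.e. the Go equivalent of openssl's -checkend.
	    func validFor(pemBytes []byte, d time.Duration) (bool, error) {
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            return false, errors.New("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            return false, err
	        }
	        return time.Now().Add(d).Before(cert.NotAfter), nil
	    }
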
	I1205 11:03:49.773767   11137 kubeadm.go:392] StartCluster: {Name:running-upgrade-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51775 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:03:49.773841   11137 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:03:49.784564   11137 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 11:03:49.787678   11137 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 11:03:49.787688   11137 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 11:03:49.787721   11137 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 11:03:49.790705   11137 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:03:49.790745   11137 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-829000" does not appear in /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:03:49.790760   11137 kubeconfig.go:62] /Users/jenkins/minikube-integration/20052-8600/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-829000" cluster setting kubeconfig missing "running-upgrade-829000" context setting]
	I1205 11:03:49.790923   11137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:03:49.791889   11137 kapi.go:59] client config for running-upgrade-829000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102197740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
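
	The rest.Config dump above amounts to a mutual-TLS client: the profile's client.crt/client.key for authentication and the cluster CA for server verification. With client-go, an equivalent client would be assembled roughly like this (a sketch; the long paths are shortened from the ones logged):

	    package sketch

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/rest"
	    )

	    func newClient() (*kubernetes.Clientset, error) {
	        cfg := &rest.Config{
	            Host: "https://10.0.2.15:8443",
	            TLSClientConfig: rest.TLSClientConfig{
	                CertFile: ".../profiles/running-upgrade-829000/client.crt", // shortened
	                KeyFile:  ".../profiles/running-upgrade-829000/client.key", // shortened
	                CAFile:   ".../.minikube/ca.crt",                           // shortened
	            },
	        }
	        return kubernetes.NewForConfig(cfg)
	    }
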
	I1205 11:03:49.792825   11137 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 11:03:49.795601   11137 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-829000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
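
	The drift detection hinges on diff's exit status: 0 means the deployed kubeadm.yaml matches the newly rendered one, 1 means they differ, as here, where the CRI socket gained its unix:// scheme and the cgroup driver moved from systemd to cgroupfs. A sketch of that check (not minikube's code; note diff can also exit 2 on error, which this version conflates with drift):

	    package sketch

	    import "os/exec"

	    // configDrifted treats any non-zero diff exit as "files differ" and
	    // returns the unified diff for logging.
	    func configDrifted(oldPath, newPath string) (bool, string) {
	        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	        if err != nil {
	            return true, string(out)
	        }
	        return false, ""
	    }
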
	I1205 11:03:49.795607   11137 kubeadm.go:1160] stopping kube-system containers ...
	I1205 11:03:49.795654   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:03:49.806771   11137 docker.go:483] Stopping containers: [18af6158f731 e4bc69e4ee11 3700271b73a5 da5af68cad9f b4ab67c9a319 d83e8d46af5a a63b36799185 eb02d4cd01b5 fb5ba8ab7ba0 6e40c464d81d d8e0a4f18954 a511e12be640 5a0ca8452896 d46c7ac1ea28]
	I1205 11:03:49.806840   11137 ssh_runner.go:195] Run: docker stop 18af6158f731 e4bc69e4ee11 3700271b73a5 da5af68cad9f b4ab67c9a319 d83e8d46af5a a63b36799185 eb02d4cd01b5 fb5ba8ab7ba0 6e40c464d81d d8e0a4f18954 a511e12be640 5a0ca8452896 d46c7ac1ea28
	I1205 11:03:49.818255   11137 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 11:03:49.900075   11137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:03:49.904027   11137 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Dec  5 19:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Dec  5 19:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  5 19:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Dec  5 19:03 /etc/kubernetes/scheduler.conf
	
	I1205 11:03:49.904070   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/admin.conf
	I1205 11:03:49.907384   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:03:49.907424   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:03:49.910725   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/kubelet.conf
	I1205 11:03:49.914186   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:03:49.914222   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:03:49.917406   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/controller-manager.conf
	I1205 11:03:49.920061   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:03:49.920090   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:03:49.922705   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/scheduler.conf
	I1205 11:03:49.925733   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:03:49.925766   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
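
	The four grep/rm pairs above apply one rule per file: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint (grep exits 1 when the pattern is absent), remove it so the kubeconfig phase further down can regenerate it. Compressed into a loop (an illustrative sketch):

	    package sketch

	    import "os/exec"

	    func pruneStaleKubeconfigs() {
	        endpoint := "https://control-plane.minikube.internal:51775"
	        for _, name := range []string{"admin.conf", "kubelet.conf",
	            "controller-manager.conf", "scheduler.conf"} {
	            path := "/etc/kubernetes/" + name
	            // grep exits non-zero when the endpoint is missing from the file
	            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
	                _ = exec.Command("sudo", "rm", "-f", path).Run()
	            }
	        }
	    }
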
	I1205 11:03:49.928841   11137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:03:49.932303   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:03:49.988422   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:03:50.559375   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:03:50.776943   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:03:50.798060   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
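
	Rather than a full "kubeadm init", the restart replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. As a loop (a sketch; the PATH prefix and config path are the ones logged above, the loop structure is illustrative):

	    package sketch

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func replayInitPhases() error {
	        phases := []string{"certs all", "kubeconfig all", "kubelet-start",
	            "control-plane all", "etcd local"}
	        for _, p := range phases {
	            cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
	            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
	                return fmt.Errorf("kubeadm phase %q failed: %w", p, err)
	            }
	        }
	        return nil
	    }
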
	I1205 11:03:50.821802   11137 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:03:50.821902   11137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:03:51.324072   11137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:03:51.823982   11137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:03:51.828385   11137 api_server.go:72] duration metric: took 1.0065955s to wait for apiserver process to appear ...
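
	The ~500 ms spacing of the pgrep runs (50.821 → 51.324 → 51.823) suggests a fixed-interval poll, which would also explain the reported 1.0065955s: two misses plus one hit. A sketch of such a wait (the interval is inferred from the timestamps; the helper name is invented):

	    package sketch

	    import (
	        "errors"
	        "os/exec"
	        "time"
	    )

	    func waitForAPIServerProcess(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            // same pattern as the logged pgrep invocations
	            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return errors.New("kube-apiserver process never appeared")
	    }
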
	I1205 11:03:51.828399   11137 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:03:51.828430   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:03:56.830565   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
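
	From here on every healthz probe fails identically: nothing answers on 10.0.2.15:8443, so the HTTP client's ~5 s timeout fires ("Client.Timeout exceeded while awaiting headers"). A sketch of such a probe; InsecureSkipVerify is an illustration-only shortcut, a real client would trust the cluster CA:

	    package sketch

	    import (
	        "crypto/tls"
	        "net/http"
	        "time"
	    )

	    func probeHealthz() error {
	        client := &http.Client{
	            Timeout: 5 * time.Second, // matches the ~5 s gaps in the log
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	            },
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            return err // this is where "context deadline exceeded" surfaces
	        }
	        defer resp.Body.Close()
	        return nil // a healthy apiserver answers 200 with body "ok"
	    }
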
	I1205 11:03:56.830659   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:01.831540   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:01.831642   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:06.833368   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:06.833471   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:11.835011   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:11.835109   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:16.837245   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:16.837348   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:21.840014   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:21.840107   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:26.842853   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:26.842943   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:31.845662   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:31.845740   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:36.848369   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:36.848459   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:41.851251   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:41.851379   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:46.853440   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:46.853535   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:51.856166   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:51.856475   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:04:51.878101   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:04:51.878225   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:04:51.892311   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:04:51.892399   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:04:51.907972   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:04:51.908051   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:04:51.918452   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:04:51.918543   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:04:51.929419   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:04:51.929499   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:04:51.939616   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:04:51.939693   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:04:51.949799   11137 logs.go:282] 0 containers: []
	W1205 11:04:51.949814   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:04:51.949885   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:04:51.959890   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:04:51.959912   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:04:51.959918   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:04:52.030086   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:04:52.030096   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:04:52.044003   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:04:52.044019   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:04:52.068415   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:04:52.068426   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:04:52.079992   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:04:52.080006   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:04:52.091210   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:04:52.091229   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:04:52.131778   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:04:52.131787   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:04:52.150620   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:04:52.150630   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:04:52.162957   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:04:52.162970   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:04:52.178467   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:04:52.178479   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:04:52.190032   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:04:52.190044   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:04:52.205374   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:04:52.205385   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:04:52.235345   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:04:52.235355   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:04:52.249577   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:04:52.249589   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:04:52.262406   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:04:52.262420   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:04:52.273653   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:04:52.273663   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:04:52.300284   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:04:52.300296   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
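
	Each failed probe triggers the same diagnostic sweep, which repeats for the rest of this section: for every control-plane component, list matching containers via Docker's k8s_<name> naming convention, then tail 400 lines from each. The whole fan-out, compressed (an illustrative sketch):

	    package sketch

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func gatherComponentLogs() {
	        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	        for _, c := range components {
	            ids, _ := exec.Command("docker", "ps", "-a",
	                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
	            for _, id := range strings.Fields(string(ids)) {
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
	            }
	        }
	    }
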
	I1205 11:04:54.806945   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:04:59.807345   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:04:59.807830   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:04:59.846093   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:04:59.846257   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:04:59.867893   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:04:59.868010   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:04:59.882916   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:04:59.883006   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:04:59.895694   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:04:59.895774   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:04:59.906357   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:04:59.906432   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:04:59.919124   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:04:59.919204   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:04:59.929245   11137 logs.go:282] 0 containers: []
	W1205 11:04:59.929256   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:04:59.929321   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:04:59.939408   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:04:59.939424   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:04:59.939431   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:04:59.981480   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:04:59.981487   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:00.011556   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:00.011566   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:00.026130   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:00.026144   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:00.041247   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:00.041260   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:00.055010   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:00.055022   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:00.069259   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:00.069269   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:00.080552   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:00.080565   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:00.105765   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:00.105775   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:00.120672   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:00.120683   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:00.132191   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:00.132203   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:00.143803   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:00.143815   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:00.148109   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:00.148116   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:00.184165   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:00.184178   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:00.196034   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:00.196050   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:00.209025   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:00.209037   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:00.220927   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:00.220941   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:02.747785   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:07.750515   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:07.751223   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:07.794456   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:07.794623   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:07.815500   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:07.815635   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:07.832489   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:07.832591   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:07.844956   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:07.845026   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:07.855625   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:07.855693   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:07.866449   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:07.866529   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:07.876932   11137 logs.go:282] 0 containers: []
	W1205 11:05:07.876942   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:07.877001   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:07.887677   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:07.887705   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:07.887711   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:07.919955   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:07.919967   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:07.932169   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:07.932182   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:07.943911   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:07.943921   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:07.955229   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:07.955243   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:07.969026   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:07.969037   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:07.995177   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:07.995185   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:08.007167   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:08.007178   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:08.011925   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:08.011931   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:08.025979   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:08.025990   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:08.040374   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:08.040383   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:08.057633   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:08.057643   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:08.069058   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:08.069067   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:08.110615   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:08.110622   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:08.146137   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:08.146148   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:08.159686   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:08.159700   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:08.180780   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:08.180795   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:10.698290   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:15.701082   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:15.701681   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:15.741271   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:15.741425   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:15.763056   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:15.763182   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:15.780258   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:15.780349   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:15.792332   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:15.792414   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:15.802933   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:15.803006   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:15.815130   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:15.815206   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:15.825376   11137 logs.go:282] 0 containers: []
	W1205 11:05:15.825390   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:15.825446   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:15.836143   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:15.836159   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:15.836165   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:15.870767   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:15.870778   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:15.883204   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:15.883216   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:15.907505   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:15.907518   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:15.919403   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:15.919419   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:15.923906   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:15.923912   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:15.938200   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:15.938212   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:15.952845   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:15.952854   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:15.964194   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:15.964206   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:15.978983   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:15.978994   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:16.003689   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:16.003699   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:16.015741   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:16.015755   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:16.027303   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:16.027314   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:16.039835   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:16.039847   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:16.079708   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:16.079718   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:16.094005   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:16.094019   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:16.129894   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:16.129904   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:18.653942   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:23.656689   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:23.657264   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:23.696395   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:23.696544   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:23.718296   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:23.718421   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:23.734126   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:23.734215   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:23.746694   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:23.746788   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:23.757727   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:23.757801   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:23.768429   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:23.768502   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:23.778693   11137 logs.go:282] 0 containers: []
	W1205 11:05:23.778707   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:23.778774   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:23.793069   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:23.793088   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:23.793094   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:23.797991   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:23.797999   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:23.809400   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:23.809411   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:23.823624   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:23.823636   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:23.847795   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:23.847809   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:23.863115   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:23.863124   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:23.874328   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:23.874339   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:23.916337   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:23.916344   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:23.951378   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:23.951393   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:23.966569   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:23.966582   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:23.982344   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:23.982352   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:23.994076   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:23.994088   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:24.005839   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:24.005853   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:24.020344   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:24.020353   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:24.049070   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:24.049082   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:24.060573   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:24.060583   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:24.071873   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:24.071883   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:26.598661   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:31.601452   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:31.602040   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:31.643707   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:31.643870   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:31.665292   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:31.665406   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:31.681020   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:31.681113   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:31.693038   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:31.693120   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:31.704299   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:31.704376   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:31.714651   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:31.714721   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:31.726225   11137 logs.go:282] 0 containers: []
	W1205 11:05:31.726241   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:31.726310   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:31.736796   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:31.736814   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:31.736820   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:31.741384   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:31.741389   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:31.755347   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:31.755358   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:31.770006   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:31.770019   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:31.795387   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:31.795396   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:31.837422   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:31.837432   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:31.865759   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:31.865772   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:31.880762   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:31.880775   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:31.892305   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:31.892317   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:31.913862   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:31.913873   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:31.948091   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:31.948102   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:31.960062   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:31.960076   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:31.974059   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:31.974071   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:31.985121   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:31.985133   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:31.999272   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:31.999285   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:32.014089   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:32.014101   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:32.026290   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:32.026302   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:34.539431   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:39.542219   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:39.542805   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:39.587321   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:39.587477   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:39.606737   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:39.606869   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:39.621523   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:39.621614   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:39.633741   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:39.633824   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:39.645004   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:39.645074   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:39.655494   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:39.655561   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:39.665984   11137 logs.go:282] 0 containers: []
	W1205 11:05:39.665997   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:39.666065   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:39.676173   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:39.676195   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:39.676201   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:39.718150   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:39.718161   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:39.752880   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:39.752892   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:39.771108   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:39.771120   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:39.785900   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:39.785912   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:39.803097   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:39.803106   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:39.831232   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:39.831247   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:39.842583   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:39.842598   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:39.854482   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:39.854496   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:39.866164   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:39.866177   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:39.877462   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:39.877475   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:39.902920   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:39.902931   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:39.907163   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:39.907172   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:39.923230   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:39.923243   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:39.936436   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:39.936446   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:39.947553   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:39.947564   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:39.962257   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:39.962269   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:42.477206   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:47.479438   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:47.479765   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:47.508657   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:47.508815   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:47.526381   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:47.526484   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:47.539699   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:47.539773   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:47.551602   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:47.551677   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:47.561672   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:47.561746   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:47.572476   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:47.572540   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:47.582693   11137 logs.go:282] 0 containers: []
	W1205 11:05:47.582705   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:47.582768   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:47.597105   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:47.597123   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:47.597129   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:47.618199   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:47.618209   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:47.635179   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:47.635189   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:47.660730   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:47.660738   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:47.675257   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:47.675269   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:47.686659   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:47.686670   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:47.697776   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:47.697785   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:47.702031   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:47.702038   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:47.716272   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:47.716285   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:47.727556   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:47.727568   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:47.742313   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:47.742322   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:47.755540   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:47.755551   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:47.771106   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:47.771115   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:47.812723   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:47.812729   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:47.848144   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:47.848158   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:47.875774   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:47.875786   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:47.888071   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:47.888083   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
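Each failed probe triggers the same gathering pass seen above: for every control-plane component, list matching container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail each hit with `docker logs --tail 400 <id>` (plus `journalctl`/`dmesg` for host-level logs). A minimal sketch of that enumeration step, under the assumption of a reachable docker CLI; the helper name `containerIDs` and the output framing are hypothetical:

```go
// Hypothetical sketch of the log-gathering step recorded above: enumerate
// containers per component, then tail each container's recent logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// mirrors the warning in the log when a filter matches nothing
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
		}
	}
}
```

Note that each component filter matches two container IDs here (e.g. `262c3ed215cb b4ab67c9a319` for kube-apiserver): the older exited instance plus its restarted replacement, which is why every cycle tails both.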
	I1205 11:05:50.402171   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:05:55.404388   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:05:55.404504   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:05:55.416068   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:05:55.416148   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:05:55.432803   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:05:55.432875   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:05:55.443514   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:05:55.443582   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:05:55.458386   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:05:55.458468   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:05:55.468546   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:05:55.468612   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:05:55.479180   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:05:55.479258   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:05:55.489930   11137 logs.go:282] 0 containers: []
	W1205 11:05:55.489940   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:05:55.490002   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:05:55.504789   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:05:55.504808   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:05:55.504813   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:05:55.519844   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:05:55.519854   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:05:55.531371   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:05:55.531384   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:05:55.542584   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:05:55.542596   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:05:55.558492   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:05:55.558501   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:05:55.573984   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:05:55.573996   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:05:55.585894   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:05:55.585908   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:05:55.600854   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:05:55.600864   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:05:55.615774   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:05:55.615784   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:05:55.640246   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:05:55.640253   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:05:55.667950   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:05:55.667963   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:05:55.682180   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:05:55.682190   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:05:55.693855   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:05:55.693868   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:05:55.710736   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:05:55.710746   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:05:55.727040   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:05:55.727050   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:05:55.768748   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:05:55.768761   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:05:55.773072   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:05:55.773080   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:05:58.312019   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:03.314663   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:03.314870   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:03.334468   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:03.334558   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:03.355469   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:03.355553   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:03.366771   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:03.366841   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:03.378418   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:03.378497   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:03.389481   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:03.389554   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:03.400826   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:03.400902   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:03.411743   11137 logs.go:282] 0 containers: []
	W1205 11:06:03.411754   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:03.411821   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:03.422265   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:03.422284   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:03.422290   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:03.446855   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:03.446866   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:03.489865   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:03.489877   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:03.520687   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:03.520699   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:03.532378   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:03.532391   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:03.547721   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:03.547735   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:03.559600   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:03.559610   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:03.574284   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:03.574292   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:03.585924   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:03.585936   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:03.597407   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:03.597419   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:03.609324   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:03.609335   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:03.623638   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:03.623650   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:03.638788   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:03.638797   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:03.650855   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:03.650866   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:03.669219   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:03.669228   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:03.673706   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:03.673711   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:03.709223   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:03.709235   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:06.225953   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:11.228250   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:11.228862   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:11.268244   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:11.268393   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:11.303932   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:11.304028   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:11.317020   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:11.317101   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:11.328210   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:11.328278   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:11.338892   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:11.338972   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:11.351488   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:11.351568   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:11.361653   11137 logs.go:282] 0 containers: []
	W1205 11:06:11.361667   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:11.361735   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:11.377976   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:11.377993   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:11.378000   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:11.407526   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:11.407538   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:11.425726   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:11.425739   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:11.440342   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:11.440354   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:11.461827   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:11.461839   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:11.476939   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:11.476953   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:11.488225   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:11.488235   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:11.502054   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:11.502066   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:11.506767   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:11.506776   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:11.541407   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:11.541420   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:11.553362   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:11.553375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:11.572092   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:11.572104   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:11.598141   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:11.598149   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:11.637421   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:11.637429   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:11.653965   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:11.653975   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:11.665145   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:11.665154   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:11.682968   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:11.682979   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:14.195420   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:19.196251   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:19.196447   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:19.210368   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:19.210451   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:19.221338   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:19.221414   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:19.235508   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:19.235577   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:19.253739   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:19.253815   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:19.264995   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:19.265079   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:19.279531   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:19.279779   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:19.290914   11137 logs.go:282] 0 containers: []
	W1205 11:06:19.290926   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:19.290992   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:19.303311   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:19.303330   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:19.303336   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:19.316204   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:19.316216   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:19.331582   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:19.331595   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:19.347668   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:19.347683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:19.364078   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:19.364092   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:19.376990   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:19.377001   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:19.401043   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:19.401057   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:19.442073   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:19.442093   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:19.479309   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:19.479321   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:19.508476   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:19.508488   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:19.520224   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:19.520237   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:19.532715   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:19.532725   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:19.550035   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:19.550044   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:19.554301   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:19.554310   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:19.569076   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:19.569089   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:19.581120   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:19.581131   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:19.593520   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:19.593532   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:22.109760   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:27.111927   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:27.112065   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:27.124273   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:27.124440   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:27.136913   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:27.136986   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:27.148302   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:27.148380   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:27.163664   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:27.163756   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:27.177826   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:27.177903   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:27.189774   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:27.189857   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:27.201594   11137 logs.go:282] 0 containers: []
	W1205 11:06:27.201606   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:27.201678   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:27.212971   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:27.212991   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:27.212997   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:27.218161   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:27.218179   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:27.237163   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:27.237175   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:27.249979   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:27.249992   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:27.288625   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:27.288638   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:27.306985   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:27.307000   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:27.324729   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:27.324748   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:27.338145   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:27.338160   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:27.354765   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:27.354777   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:27.368157   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:27.368170   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:27.381530   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:27.381542   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:27.408241   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:27.408252   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:27.451238   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:27.451257   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:27.484874   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:27.484893   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:27.500840   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:27.500853   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:27.522255   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:27.522274   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:27.534829   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:27.534845   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:30.062160   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:35.064502   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:35.064819   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:35.090909   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:35.091047   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:35.108270   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:35.108370   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:35.121980   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:35.122082   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:35.134346   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:35.134427   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:35.144602   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:35.144685   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:35.155492   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:35.155571   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:35.165495   11137 logs.go:282] 0 containers: []
	W1205 11:06:35.165511   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:35.165582   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:35.175559   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:35.175590   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:35.175595   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:35.211498   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:35.211508   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:35.225868   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:35.225881   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:35.238092   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:35.238103   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:35.255596   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:35.255609   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:35.266685   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:35.266698   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:35.279811   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:35.279824   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:35.295418   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:35.295431   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:35.315113   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:35.315125   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:35.331076   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:35.331087   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:35.358476   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:35.358492   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:35.372058   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:35.372072   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:35.387867   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:35.387876   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:35.413070   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:35.413079   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:35.465471   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:35.465481   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:35.469616   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:35.469623   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:35.482930   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:35.482940   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:37.998448   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:43.001168   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:43.001661   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:43.038613   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:43.038767   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:43.058654   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:43.058821   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:43.077309   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:43.077394   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:43.094428   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:43.094513   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:43.105181   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:43.105262   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:43.116050   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:43.116138   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:43.126425   11137 logs.go:282] 0 containers: []
	W1205 11:06:43.126437   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:43.126509   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:43.137220   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:43.137241   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:43.137245   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:43.154778   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:43.154787   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:43.169628   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:43.169639   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:43.181188   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:43.181204   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:43.204395   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:43.204402   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:43.220552   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:43.220565   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:43.232845   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:43.232856   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:43.261287   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:43.261297   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:43.275742   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:43.275752   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:43.286979   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:43.286989   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:43.293821   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:43.293830   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:43.312200   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:43.312211   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:43.326042   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:43.326053   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:43.337338   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:43.337350   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:43.349327   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:43.349339   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:43.360972   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:43.360985   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:43.400672   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:43.400682   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:45.937255   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:50.938484   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:50.938667   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:50.956476   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:50.956556   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:50.967060   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:50.967131   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:50.977607   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:50.977684   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:50.988102   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:50.988175   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:50.998672   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:50.998754   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:51.009262   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:51.009337   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:51.019682   11137 logs.go:282] 0 containers: []
	W1205 11:06:51.019693   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:51.019757   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:51.030614   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:51.030633   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:51.030642   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:51.048359   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:51.048370   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:51.069907   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:51.069918   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:51.080889   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:51.080899   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:51.092367   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:51.092379   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:51.130933   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:51.130943   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:51.145901   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:51.145911   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:51.173973   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:51.173984   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:51.185648   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:51.185679   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:51.204389   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:51.204399   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:51.216005   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:51.216016   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:51.231043   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:51.231054   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:51.247698   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:51.247710   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:51.291723   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:51.291733   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:51.296969   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:51.296977   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:51.310355   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:51.310364   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:51.326002   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:51.326011   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:53.851758   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:58.853252   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:58.853799   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:58.892524   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:58.892675   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:58.912288   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:58.912402   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:58.929728   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:58.929820   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:58.943208   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:58.943293   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:58.953944   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:58.954027   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:58.964400   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:58.964470   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:58.979971   11137 logs.go:282] 0 containers: []
	W1205 11:06:58.979983   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:58.980053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:58.990137   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:58.990154   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:58.990161   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:59.007465   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:59.007476   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:59.022773   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:59.022786   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:59.045824   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:59.045833   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:59.081216   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:59.081229   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:59.100797   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:59.100811   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:59.115591   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:59.115604   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:59.127609   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:59.127621   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:59.139802   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:59.139817   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:59.182277   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:59.182288   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:59.193344   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:59.193355   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:59.204788   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:59.204800   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:59.219135   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:59.219147   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:59.235477   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:59.235491   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:59.265083   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:59.265094   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:59.277207   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:59.277218   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:59.281924   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:59.281931   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:01.799415   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:06.802011   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:06.802171   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:06.821305   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:06.821384   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:06.834464   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:06.834548   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:06.845334   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:06.845407   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:06.857376   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:06.857481   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:06.874189   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:06.874255   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:06.890333   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:06.890394   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:06.902576   11137 logs.go:282] 0 containers: []
	W1205 11:07:06.902590   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:06.902655   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:06.915244   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:06.915263   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:06.915269   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:06.928222   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:06.928233   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:06.944007   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:06.944017   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:06.956225   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:06.956241   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:06.970354   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:06.970363   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:06.981789   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:06.981804   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:06.993295   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:06.993308   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:07.008362   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:07.008375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:07.021599   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:07.021607   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:07.040921   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:07.040932   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:07.064085   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:07.064105   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:07.106636   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:07.106653   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:07.121756   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:07.121769   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:07.150466   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:07.150480   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:07.165208   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:07.165219   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:07.176581   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:07.176593   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:07.181446   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:07.181456   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
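Every failed health check above triggers the same diagnostic sweep: one docker ps -a per control-plane component to resolve container IDs (both running and exited, hence two IDs for each restarted component), then docker logs --tail 400 on each, plus journalctl for kubelet and Docker, dmesg, and kubectl describe nodes. A condensed sketch of the container half of that sweep — component names and commands are taken from the log lines; the containersFor helper is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor mirrors the log's `docker ps -a --filter=name=k8s_<name>` step.
func containersFor(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids := containersFor(c)
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Tail the last 400 lines, as in the gathering steps above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = logs // minikube feeds these into its log analysis
		}
	}
}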
	I1205 11:07:09.718680   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:14.720818   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:14.720951   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:14.733072   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:14.733152   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:14.744867   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:14.744948   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:14.756029   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:14.756107   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:14.767350   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:14.767434   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:14.779238   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:14.779325   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:14.790926   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:14.791014   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:14.801765   11137 logs.go:282] 0 containers: []
	W1205 11:07:14.801779   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:14.801848   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:14.812657   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:14.812675   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:14.812682   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:14.850309   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:14.850324   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:14.870518   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:14.870532   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:14.913657   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:14.913674   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:14.950198   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:14.950214   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:14.965892   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:14.965904   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:14.981548   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:14.981567   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:14.999809   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:14.999820   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:15.024494   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:15.024503   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:15.065981   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:15.065998   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:15.082083   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:15.082099   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:15.096671   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:15.096683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:15.108321   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:15.108334   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:15.120217   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:15.120229   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:15.124886   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:15.124897   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:15.138681   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:15.138693   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:15.150784   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:15.150795   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:17.664635   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:22.667117   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:22.667283   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:22.683873   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:22.683964   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:22.697294   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:22.697375   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:22.714720   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:22.714798   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:22.728892   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:22.728973   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:22.739813   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:22.739902   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:22.750651   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:22.750730   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:22.761150   11137 logs.go:282] 0 containers: []
	W1205 11:07:22.761163   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:22.761235   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:22.771735   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:22.771757   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:22.771764   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:22.783692   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:22.783707   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:22.820380   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:22.820394   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:22.834714   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:22.834728   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:22.851015   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:22.851025   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:22.893948   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:22.893960   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:22.909758   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:22.909771   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:22.932738   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:22.932746   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:22.944239   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:22.944249   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:22.959840   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:22.959853   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:22.970685   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:22.970697   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:22.990288   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:22.990302   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:23.008717   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:23.008728   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:23.026068   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:23.026078   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:23.038628   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:23.038642   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:23.043345   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:23.043352   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:23.058807   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:23.058817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:25.589867   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:30.592132   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:30.592364   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:30.608924   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:30.609039   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:30.621868   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:30.621953   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:30.632918   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:30.632994   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:30.643495   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:30.643579   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:30.654711   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:30.654799   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:30.665361   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:30.665437   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:30.675631   11137 logs.go:282] 0 containers: []
	W1205 11:07:30.675644   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:30.675717   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:30.686495   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:30.686516   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:30.686522   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:30.727126   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:30.727136   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:30.762416   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:30.762427   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:30.774023   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:30.774036   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:30.778550   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:30.778560   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:30.793344   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:30.793354   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:30.829009   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:30.829021   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:30.842730   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:30.842745   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:30.864893   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:30.864905   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:30.887264   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:30.887274   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:30.899864   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:30.899877   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:30.912168   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:30.912180   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:30.926732   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:30.926746   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:30.940879   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:30.940888   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:30.955382   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:30.955394   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:30.971260   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:30.971273   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:30.987591   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:30.987601   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:33.500632   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:38.502782   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:38.502910   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:38.514251   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:38.514334   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:38.525434   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:38.525518   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:38.535783   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:38.535866   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:38.549185   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:38.549268   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:38.559657   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:38.559738   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:38.570051   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:38.570127   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:38.580216   11137 logs.go:282] 0 containers: []
	W1205 11:07:38.580231   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:38.580298   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:38.590940   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:38.590959   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:38.590965   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:38.602269   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:38.602279   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:38.645092   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:38.645101   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:38.658973   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:38.658984   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:38.674039   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:38.674051   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:38.685057   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:38.685068   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:38.696779   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:38.696790   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:38.712742   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:38.712755   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:38.724492   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:38.724504   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:38.748809   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:38.748822   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:38.760518   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:38.760532   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:38.765048   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:38.765059   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:38.799473   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:38.799485   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:38.816510   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:38.816522   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:38.831892   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:38.831906   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:38.844151   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:38.844162   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:38.873572   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:38.873585   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:41.393837   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:46.396092   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:46.396236   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:46.408278   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:46.408366   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:46.422106   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:46.422189   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:46.432881   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:46.432964   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:46.445067   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:46.445145   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:46.455529   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:46.455609   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:46.466058   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:46.466127   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:46.476456   11137 logs.go:282] 0 containers: []
	W1205 11:07:46.476468   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:46.476538   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:46.487379   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:46.487398   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:46.487404   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:46.492211   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:46.492220   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:46.506925   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:46.506935   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:46.518613   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:46.518624   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:46.531118   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:46.531133   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:46.545017   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:46.545026   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:46.556531   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:46.556543   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:46.579356   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:46.579368   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:46.621706   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:46.621714   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:46.633492   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:46.633503   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:46.648690   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:46.648700   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:46.660328   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:46.660341   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:46.694965   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:46.694976   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:46.723234   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:46.723248   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:46.740809   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:46.740818   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:46.755364   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:46.755375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:46.773313   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:46.773324   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:49.286933   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:54.288522   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:54.288649   11137 kubeadm.go:597] duration metric: took 4m4.503536375s to restartPrimaryControlPlane
	W1205 11:07:54.288740   11137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 11:07:54.288788   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 11:07:55.289060   11137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.000269792s)
	I1205 11:07:55.289149   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 11:07:55.295101   11137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:07:55.298264   11137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:07:55.301176   11137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:07:55.301183   11137 kubeadm.go:157] found existing configuration files:
	
	I1205 11:07:55.301214   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/admin.conf
	I1205 11:07:55.303839   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:07:55.303875   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:07:55.307274   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/kubelet.conf
	I1205 11:07:55.310499   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:07:55.310526   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:07:55.313205   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/controller-manager.conf
	I1205 11:07:55.315744   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:07:55.315780   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:07:55.318994   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/scheduler.conf
	I1205 11:07:55.321938   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:07:55.321970   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
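The grep/rm pairs above implement a simple invariant: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A grep exit status of 2 (file missing, as here after the reset) or 1 (endpoint absent) both lead to removal, so the kubeadm init that follows regenerates every file. A sketch of that check, with the endpoint and paths taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51775" // from the log
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so `kubeadm init`
			// writes a fresh one (mirrors the `sudo rm -f <file>` steps above).
			_ = os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}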
	I1205 11:07:55.324576   11137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 11:07:55.341290   11137 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 11:07:55.341329   11137 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 11:07:55.399472   11137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 11:07:55.399528   11137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 11:07:55.399583   11137 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 11:07:55.448901   11137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 11:07:55.452080   11137 out.go:235]   - Generating certificates and keys ...
	I1205 11:07:55.452113   11137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 11:07:55.452146   11137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 11:07:55.452194   11137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 11:07:55.452235   11137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 11:07:55.452275   11137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 11:07:55.452305   11137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 11:07:55.452341   11137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 11:07:55.452379   11137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 11:07:55.452420   11137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 11:07:55.452460   11137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 11:07:55.452481   11137 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 11:07:55.452509   11137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 11:07:55.658905   11137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 11:07:55.745409   11137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 11:07:55.842588   11137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 11:07:56.002390   11137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 11:07:56.032349   11137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 11:07:56.032758   11137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 11:07:56.032851   11137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 11:07:56.127252   11137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 11:07:56.131259   11137 out.go:235]   - Booting up control plane ...
	I1205 11:07:56.131308   11137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 11:07:56.131359   11137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 11:07:56.131394   11137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 11:07:56.132628   11137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 11:07:56.133317   11137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 11:08:01.135462   11137 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001975 seconds
	I1205 11:08:01.135559   11137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 11:08:01.141353   11137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 11:08:01.651262   11137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 11:08:01.651370   11137 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-829000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 11:08:02.156937   11137 kubeadm.go:310] [bootstrap-token] Using token: gxdypa.k5ak3nnpbvxiuq31
	I1205 11:08:02.163417   11137 out.go:235]   - Configuring RBAC rules ...
	I1205 11:08:02.163507   11137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 11:08:02.166343   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 11:08:02.174526   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 11:08:02.175574   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 11:08:02.176500   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 11:08:02.177454   11137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 11:08:02.180876   11137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 11:08:02.343533   11137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 11:08:02.568875   11137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 11:08:02.569400   11137 kubeadm.go:310] 
	I1205 11:08:02.569433   11137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 11:08:02.569437   11137 kubeadm.go:310] 
	I1205 11:08:02.569480   11137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 11:08:02.569485   11137 kubeadm.go:310] 
	I1205 11:08:02.569499   11137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 11:08:02.569537   11137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 11:08:02.569563   11137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 11:08:02.569568   11137 kubeadm.go:310] 
	I1205 11:08:02.569603   11137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 11:08:02.569611   11137 kubeadm.go:310] 
	I1205 11:08:02.569636   11137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 11:08:02.569640   11137 kubeadm.go:310] 
	I1205 11:08:02.569666   11137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 11:08:02.569722   11137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 11:08:02.569779   11137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 11:08:02.569784   11137 kubeadm.go:310] 
	I1205 11:08:02.569824   11137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 11:08:02.569876   11137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 11:08:02.569880   11137 kubeadm.go:310] 
	I1205 11:08:02.569922   11137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gxdypa.k5ak3nnpbvxiuq31 \
	I1205 11:08:02.569973   11137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a \
	I1205 11:08:02.569985   11137 kubeadm.go:310] 	--control-plane 
	I1205 11:08:02.569987   11137 kubeadm.go:310] 
	I1205 11:08:02.570067   11137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 11:08:02.570073   11137 kubeadm.go:310] 
	I1205 11:08:02.570120   11137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gxdypa.k5ak3nnpbvxiuq31 \
	I1205 11:08:02.570177   11137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a 
	I1205 11:08:02.570238   11137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 11:08:02.570246   11137 cni.go:84] Creating CNI manager for ""
	I1205 11:08:02.570253   11137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:08:02.576404   11137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 11:08:02.584438   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 11:08:02.587536   11137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
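The bridge CNI step writes a 496-byte conflist straight from memory to /etc/cni/net.d/1-k8s.conflist. The exact payload is not captured in this log; the sketch below writes a minimal bridge-plus-portmap conflist of the same general shape (the JSON contents are an assumption — only the destination path and the mkdir -p step come from the log):

package main

import "os"

// Illustrative bridge CNI config; the real 1-k8s.conflist contents are not
// shown in this log, so the fields below are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// `sudo mkdir -p /etc/cni/net.d`, as in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	// The "scp memory --> file" step: write the in-memory config to disk.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}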
	I1205 11:08:02.592404   11137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 11:08:02.592461   11137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 11:08:02.592479   11137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-829000 minikube.k8s.io/updated_at=2024_12_05T11_08_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=running-upgrade-829000 minikube.k8s.io/primary=true
	I1205 11:08:02.635264   11137 ops.go:34] apiserver oom_adj: -16
	I1205 11:08:02.635266   11137 kubeadm.go:1113] duration metric: took 42.850708ms to wait for elevateKubeSystemPrivileges
	I1205 11:08:02.635279   11137 kubeadm.go:394] duration metric: took 4m12.864185541s to StartCluster
	I1205 11:08:02.635293   11137 settings.go:142] acquiring lock: {Name:mk685c3b4b58f394644fceb0edca00785ff86d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:08:02.635475   11137 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:08:02.635890   11137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:08:02.636104   11137 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:08:02.636116   11137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 11:08:02.636154   11137 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-829000"
	I1205 11:08:02.636163   11137 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-829000"
	W1205 11:08:02.636166   11137 addons.go:243] addon storage-provisioner should already be in state true
	I1205 11:08:02.636179   11137 host.go:66] Checking if "running-upgrade-829000" exists ...
	I1205 11:08:02.636206   11137 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-829000"
	I1205 11:08:02.636228   11137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-829000"
	I1205 11:08:02.636315   11137 config.go:182] Loaded profile config "running-upgrade-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:08:02.637244   11137 kapi.go:59] client config for running-upgrade-829000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102197740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
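The rest.Config dump above is the Go client minikube builds for this profile: client certificate, key, and cluster CA come from the local .minikube tree, and the host is the VM-internal apiserver endpoint. A hedged client-go reconstruction (requires k8s.io/client-go as a dependency; the List call stands in for the storage-class check that times out later in this log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths and host taken from the client config dump above.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of the StorageClasses list the default-storageclass addon
	// attempts below; against an unreachable apiserver this is the call that
	// yields the "dial tcp 10.0.2.15:8443: i/o timeout" error.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list storageclasses:", err)
		return
	}
	fmt.Println("storage classes:", len(scs.Items))
}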
	I1205 11:08:02.637690   11137 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-829000"
	W1205 11:08:02.637695   11137 addons.go:243] addon default-storageclass should already be in state true
	I1205 11:08:02.637707   11137 host.go:66] Checking if "running-upgrade-829000" exists ...
	I1205 11:08:02.640457   11137 out.go:177] * Verifying Kubernetes components...
	I1205 11:08:02.640750   11137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 11:08:02.644890   11137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 11:08:02.644896   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:08:02.647403   11137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:08:02.651474   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:08:02.655443   11137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:08:02.655449   11137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 11:08:02.655455   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:08:02.740607   11137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:08:02.746417   11137 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:08:02.746474   11137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:08:02.750659   11137 api_server.go:72] duration metric: took 114.545042ms to wait for apiserver process to appear ...
	I1205 11:08:02.750667   11137 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:08:02.750674   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:02.777738   11137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 11:08:02.799889   11137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:08:03.104754   11137 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 11:08:03.104765   11137 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 11:08:07.752729   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:07.752782   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:12.753070   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:12.753111   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:17.753535   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:17.753560   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:22.753999   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:22.754058   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:27.754859   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:27.754884   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:32.755726   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:32.755763   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1205 11:08:33.106919   11137 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1205 11:08:33.118191   11137 out.go:177] * Enabled addons: storage-provisioner
	I1205 11:08:33.127113   11137 addons.go:510] duration metric: took 30.491318667s for enable addons: enabled=[storage-provisioner]
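Addon enablement runs concurrently with the apiserver wait: the storageclass and storage-provisioner manifests are applied from inside the VM with the cluster's own kubectl and in-VM kubeconfig, while the default-storageclass addon additionally lists StorageClasses through the external client — the call that fails above with an i/o timeout. A sketch of the apply step as the log runs it (the applyAddon helper name is hypothetical; command, paths, and version come from the log):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the log's in-VM apply step. Note that
// `sudo VAR=value cmd ...` passes the assignment into the command's environment.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println(err)
		}
	}
}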
	I1205 11:08:37.756815   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:37.756837   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:42.758153   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:42.758217   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:47.760374   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:47.760392   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:52.762527   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:52.762566   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:57.764811   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:57.764869   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:02.767158   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:02.767269   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:02.781569   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:02.781664   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:02.792657   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:02.792738   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:02.803171   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:02.803257   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:02.814025   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:02.814095   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:02.824999   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:02.825066   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:02.835811   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:02.835893   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:02.845904   11137 logs.go:282] 0 containers: []
	W1205 11:09:02.845915   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:02.845982   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:02.860222   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:02.860240   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:02.860245   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:02.885192   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:02.885204   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:02.899299   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:02.899311   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:02.911353   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:02.911364   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:02.928882   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:02.928892   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:02.940480   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:02.940491   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:02.952278   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:02.952288   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:02.972389   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:02.972401   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:02.984003   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:02.984014   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:02.995315   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:02.995329   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:03.031815   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:03.031826   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:03.036573   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:03.036581   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:03.071284   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:03.071295   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
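The block above is one complete diagnostic cycle: for each control-plane component, minikube looks up the container by its k8s_<name> prefix (`docker ps -a --filter=name=k8s_... --format={{.ID}}`, reported by logs.go:282), then tails the last 400 lines of its logs. A hypothetical local sketch of that pass, shelling out to docker directly rather than through minikube's ssh_runner (function names are made up for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`,
    // returning one ID per matching container (possibly none).
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(name)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            for _, id := range ids {
                // Same tail depth the log records: docker logs --tail 400 <id>.
                logs, _ := exec.Command("docker", "logs",
                    "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", name, id, logs)
            }
        }
    }

The cycle then repeats for as long as the healthz probes keep failing, which is why the remainder of this log is the same enumerate-and-tail pass at successive timestamps.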
	I1205 11:09:05.588032   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:10.590298   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:10.590579   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:10.612779   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:10.612919   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:10.632244   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:10.632335   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:10.644475   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:10.644553   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:10.655321   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:10.655402   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:10.666077   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:10.666158   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:10.676434   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:10.676515   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:10.686910   11137 logs.go:282] 0 containers: []
	W1205 11:09:10.686920   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:10.686982   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:10.697793   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:10.697810   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:10.697816   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:10.712011   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:10.712024   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:10.723381   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:10.723392   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:10.748709   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:10.748715   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:10.783662   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:10.783674   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:10.802703   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:10.802714   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:10.814859   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:10.814870   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:10.827642   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:10.827658   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:10.840008   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:10.840017   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:10.857396   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:10.857406   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:10.869719   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:10.869735   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:10.904909   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:10.904919   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:10.909621   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:10.909632   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:13.426071   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:18.428418   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:18.428652   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:18.451945   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:18.452053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:18.466876   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:18.466954   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:18.479642   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:18.479709   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:18.490046   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:18.490128   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:18.500692   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:18.500774   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:18.514292   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:18.514374   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:18.524604   11137 logs.go:282] 0 containers: []
	W1205 11:09:18.524617   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:18.524686   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:18.535425   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:18.535446   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:18.535451   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:18.549866   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:18.549879   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:18.562052   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:18.562062   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:18.582612   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:18.582623   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:18.616809   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:18.616821   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:18.650291   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:18.650304   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:18.664513   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:18.664523   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:18.680905   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:18.680918   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:18.692596   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:18.692608   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:18.717446   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:18.717453   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:18.721624   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:18.721633   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:18.735217   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:18.735228   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:18.746828   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:18.746838   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:21.312877   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:26.315165   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:26.315402   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:26.335428   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:26.335528   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:26.356186   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:26.356273   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:26.367749   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:26.367826   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:26.378241   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:26.378317   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:26.388478   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:26.388565   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:26.400967   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:26.401044   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:26.411526   11137 logs.go:282] 0 containers: []
	W1205 11:09:26.411537   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:26.411598   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:26.422031   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:26.422046   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:26.422052   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:26.436289   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:26.436302   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:26.448104   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:26.448115   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:26.466351   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:26.466364   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:26.484162   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:26.484176   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:26.496934   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:26.496945   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:26.501367   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:26.501376   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:26.535571   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:26.535585   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:26.549624   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:26.549638   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:26.561650   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:26.561664   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:26.585700   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:26.585711   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:26.618203   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:26.618211   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:26.629948   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:26.629961   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:29.146259   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:34.148673   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:34.148960   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:34.172800   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:34.172941   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:34.189959   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:34.190050   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:34.203127   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:34.203213   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:34.214566   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:34.214643   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:34.225024   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:34.225102   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:34.235887   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:34.235960   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:34.246029   11137 logs.go:282] 0 containers: []
	W1205 11:09:34.246042   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:34.246111   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:34.256210   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:34.256226   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:34.256232   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:34.270817   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:34.270831   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:34.282706   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:34.282719   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:34.294572   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:34.294582   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:34.310147   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:34.310157   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:34.314722   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:34.314732   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:34.357147   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:34.357161   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:34.372447   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:34.372460   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:34.384259   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:34.384270   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:34.399484   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:34.399495   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:34.417315   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:34.417328   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:34.428432   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:34.428443   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:34.463932   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:34.463941   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:36.990263   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:41.993019   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:41.993601   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:42.036515   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:42.036683   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:42.058171   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:42.058280   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:42.072758   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:42.072853   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:42.084855   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:42.084947   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:42.095790   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:42.095876   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:42.107366   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:42.107445   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:42.118295   11137 logs.go:282] 0 containers: []
	W1205 11:09:42.118307   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:42.118378   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:42.129345   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:42.129361   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:42.129367   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:42.153194   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:42.153202   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:42.165028   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:42.165041   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:42.177015   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:42.177024   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:42.188509   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:42.188520   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:42.200185   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:42.200197   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:42.214311   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:42.214323   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:42.232566   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:42.232578   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:42.244638   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:42.244649   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:42.259407   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:42.259416   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:42.276834   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:42.276844   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:42.310295   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:42.310305   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:42.314721   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:42.314729   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:44.856680   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:49.858966   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:49.859163   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:49.872860   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:49.872958   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:49.884376   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:49.884458   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:49.895411   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:49.895487   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:49.905600   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:49.905677   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:49.915760   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:49.915836   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:49.926420   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:49.926488   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:49.936585   11137 logs.go:282] 0 containers: []
	W1205 11:09:49.936597   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:49.936661   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:49.947200   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:49.947216   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:49.947222   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:49.982455   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:49.982463   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:49.986837   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:49.986846   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:50.021405   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:50.021416   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:50.033154   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:50.033165   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:50.058388   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:50.058403   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:50.094813   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:50.094829   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:50.108327   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:50.108339   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:50.121739   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:50.121751   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:50.136363   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:50.136374   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:50.150898   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:50.150909   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:50.162602   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:50.162617   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:50.177386   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:50.177396   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:52.691419   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:57.693708   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:57.694002   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:57.719166   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:57.719280   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:57.735586   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:57.735695   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:57.749076   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:57.749163   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:57.760022   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:57.760101   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:57.770432   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:57.770520   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:57.780645   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:57.780717   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:57.794922   11137 logs.go:282] 0 containers: []
	W1205 11:09:57.794935   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:57.795003   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:57.804971   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:57.804987   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:57.804993   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:57.841864   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:57.841876   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:57.854669   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:57.854683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:57.869342   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:57.869356   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:57.887116   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:57.887126   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:57.921870   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:57.921879   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:57.926437   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:57.926446   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:57.940551   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:57.940562   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:57.954679   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:57.954692   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:57.966321   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:57.966334   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:57.982020   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:57.982030   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:57.993373   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:57.993384   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:58.018513   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:58.018524   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:00.531883   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:05.534112   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:05.534322   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:05.547627   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:05.547716   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:05.558921   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:05.559003   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:05.569971   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:10:05.570058   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:05.580099   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:05.580175   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:05.590853   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:05.590936   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:05.601598   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:05.601669   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:05.614648   11137 logs.go:282] 0 containers: []
	W1205 11:10:05.614661   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:05.614728   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:05.629358   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:05.629373   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:05.629380   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:05.641051   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:05.641065   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:05.664640   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:05.664649   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:05.676961   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:05.676975   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:05.681544   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:05.681550   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:05.701904   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:05.701918   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:05.716762   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:05.716777   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:05.729041   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:05.729052   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:05.744394   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:05.744404   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:05.756206   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:05.756216   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:05.774116   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:05.774125   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:05.788153   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:05.788166   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:05.821551   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:05.821559   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:08.358610   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:13.361082   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:13.361422   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:13.388671   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:13.388813   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:13.409888   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:13.409993   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:13.422359   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:10:13.422445   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:13.437222   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:13.437295   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:13.448465   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:13.448541   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:13.464166   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:13.464247   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:13.475471   11137 logs.go:282] 0 containers: []
	W1205 11:10:13.475481   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:13.475539   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:13.485894   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:13.485910   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:13.485915   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:13.497806   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:13.497816   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:13.520978   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:13.520986   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:13.532070   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:13.532086   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:13.536929   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:13.536935   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:13.570908   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:13.570928   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:13.585240   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:13.585255   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:13.598942   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:13.598952   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:13.612487   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:13.612500   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:13.626951   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:13.626963   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:13.638107   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:13.638117   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:13.655388   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:13.655398   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:13.690482   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:13.690492   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:16.204121   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:21.206801   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:21.207234   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:21.234853   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:21.235002   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:21.252722   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:21.252818   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:21.267120   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:21.267215   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:21.279813   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:21.279900   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:21.290216   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:21.290305   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:21.307001   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:21.307079   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:21.316870   11137 logs.go:282] 0 containers: []
	W1205 11:10:21.316883   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:21.316951   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:21.328204   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:21.328223   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:21.328229   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:21.333397   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:21.333404   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:21.347527   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:21.347538   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:21.359230   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:21.359241   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:21.371265   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:21.371276   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:21.396511   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:21.396522   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:21.431067   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:21.431078   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:21.446622   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:21.446634   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:21.463161   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:21.463175   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:21.488694   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:21.488705   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:21.501633   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:21.501644   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:21.513610   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:21.513622   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:21.548863   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:21.548875   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:21.563899   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:21.563910   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:21.575965   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:21.575978   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:24.090561   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:29.091434   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:29.091910   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:29.123943   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:29.124102   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:29.143516   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:29.143627   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:29.158463   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:29.158557   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:29.170880   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:29.170968   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:29.182431   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:29.182509   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:29.193133   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:29.193201   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:29.203481   11137 logs.go:282] 0 containers: []
	W1205 11:10:29.203492   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:29.203559   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:29.219565   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:29.219581   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:29.219587   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:29.252804   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:29.252812   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:29.267742   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:29.267755   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:29.282807   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:29.282817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:29.298532   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:29.298543   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:29.310926   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:29.310936   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:29.332622   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:29.332635   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:29.357697   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:29.357711   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:29.421437   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:29.421449   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:29.433643   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:29.433655   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:29.445791   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:29.445802   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:29.467501   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:29.467511   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:29.479034   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:29.479048   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:29.491760   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:29.491773   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:29.496464   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:29.496475   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:32.010802   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:37.013249   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:37.013781   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:37.053569   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:37.053731   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:37.075995   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:37.076121   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:37.091884   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:37.091974   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:37.105180   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:37.105264   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:37.117037   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:37.117119   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:37.127989   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:37.128064   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:37.138748   11137 logs.go:282] 0 containers: []
	W1205 11:10:37.138759   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:37.138830   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:37.149354   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:37.149373   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:37.149379   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:37.167253   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:37.167264   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:37.182672   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:37.182683   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:37.194334   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:37.194344   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:37.208409   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:37.208420   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:37.232780   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:37.232788   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:37.267000   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:37.267011   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:37.281412   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:37.281425   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:37.292995   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:37.293007   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:37.304575   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:37.304587   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:37.316168   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:37.316179   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:37.350357   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:37.350365   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:37.354544   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:37.354553   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:37.369097   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:37.369109   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:37.385109   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:37.385119   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:39.903101   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:44.905501   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:44.905725   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:44.919227   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:44.919321   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:44.931087   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:44.931166   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:44.942042   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:44.942124   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:44.954667   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:44.954743   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:44.969447   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:44.969527   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:44.979875   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:44.979946   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:44.990403   11137 logs.go:282] 0 containers: []
	W1205 11:10:44.990415   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:44.990484   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:45.002642   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:45.002659   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:45.002665   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:45.037126   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:45.037135   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:45.051533   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:45.051546   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:45.064464   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:45.064477   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:45.078910   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:45.078922   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:45.092405   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:45.092417   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:45.107241   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:45.107251   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:45.126158   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:45.126171   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:45.130584   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:45.130591   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:45.165097   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:45.165109   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:45.190434   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:45.190444   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:45.201674   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:45.201685   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:45.213763   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:45.213776   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:45.230408   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:45.230418   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:45.243372   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:45.243435   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:47.758397   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:52.760707   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:52.760844   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:52.778856   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:52.778939   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:52.788798   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:52.788881   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:52.800862   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:52.800944   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:52.815831   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:52.815912   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:52.826836   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:52.826921   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:52.837376   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:52.837451   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:52.848073   11137 logs.go:282] 0 containers: []
	W1205 11:10:52.848084   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:52.848152   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:52.859214   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:52.859231   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:52.859237   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:52.870767   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:52.870781   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:52.882708   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:52.882719   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:52.900817   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:52.900827   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:52.912858   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:52.912869   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:52.924819   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:52.924829   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:52.960434   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:52.960443   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:52.965433   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:52.965438   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:53.001879   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:53.001893   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:53.016789   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:53.016801   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:53.028747   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:53.028759   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:53.040211   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:53.040223   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:53.054431   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:53.054441   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:53.069425   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:53.069435   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:53.094523   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:53.094531   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:55.608052   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:00.610359   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:00.610498   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:00.623131   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:00.623220   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:00.634731   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:00.634810   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:00.645322   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:00.645409   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:00.656141   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:00.656220   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:00.666909   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:00.666987   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:00.676979   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:00.677053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:00.687753   11137 logs.go:282] 0 containers: []
	W1205 11:11:00.687764   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:00.687832   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:00.697939   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:00.697958   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:00.697965   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:00.733698   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:00.733710   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:00.751298   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:00.751309   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:00.763314   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:00.763325   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:00.780071   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:00.780081   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:00.791798   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:00.791809   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:00.806564   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:00.806578   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:00.818424   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:00.818434   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:00.829729   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:00.829740   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:00.834410   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:00.834417   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:00.871570   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:00.871581   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:00.887645   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:00.887657   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:00.899188   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:00.899203   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:00.911853   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:00.911867   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:00.937025   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:00.937034   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:03.457895   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:08.460313   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:08.460593   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:08.483279   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:08.483413   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:08.501323   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:08.501418   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:08.514375   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:08.514462   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:08.525467   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:08.525546   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:08.535830   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:08.535912   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:08.546389   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:08.546460   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:08.556907   11137 logs.go:282] 0 containers: []
	W1205 11:11:08.556919   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:08.556993   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:08.567870   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:08.567888   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:08.567894   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:08.573441   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:08.573452   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:08.588408   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:08.588418   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:08.603024   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:08.603036   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:08.620606   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:08.620620   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:08.632532   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:08.632545   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:08.657780   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:08.657788   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:08.690157   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:08.690164   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:08.706846   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:08.706859   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:08.719108   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:08.719119   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:08.730389   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:08.730401   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:08.744626   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:08.744640   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:08.756243   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:08.756257   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:08.768043   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:08.768056   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:08.803717   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:08.803732   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:11.317945   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:16.320593   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:16.320842   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:16.334809   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:16.334903   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:16.347688   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:16.347766   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:16.358386   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:16.358458   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:16.369492   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:16.369566   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:16.380411   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:16.380485   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:16.391393   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:16.391475   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:16.403165   11137 logs.go:282] 0 containers: []
	W1205 11:11:16.403176   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:16.403243   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:16.414013   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:16.414030   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:16.414036   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:16.425883   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:16.425895   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:16.438570   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:16.438580   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:16.450259   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:16.450269   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:16.462188   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:16.462198   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:16.497936   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:16.497949   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:16.502558   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:16.502568   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:16.514803   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:16.514817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:16.529414   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:16.529424   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:16.553246   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:16.553259   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:16.586928   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:16.586938   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:16.602391   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:16.602404   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:16.614075   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:16.614087   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:16.629645   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:16.629655   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:16.647126   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:16.647137   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:19.166749   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:24.169018   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:24.169187   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:24.180701   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:24.180779   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:24.192307   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:24.192391   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:24.203252   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:24.203336   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:24.214357   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:24.214437   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:24.225760   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:24.225836   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:24.237277   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:24.237358   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:24.247597   11137 logs.go:282] 0 containers: []
	W1205 11:11:24.247609   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:24.247676   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:24.258021   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:24.258040   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:24.258046   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:24.273227   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:24.273237   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:24.284823   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:24.284834   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:24.296849   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:24.296862   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:24.320733   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:24.320740   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:24.354931   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:24.354941   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:24.359344   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:24.359351   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:24.370968   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:24.370979   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:24.382764   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:24.382775   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:24.398835   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:24.398851   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:24.417132   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:24.417147   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:24.452157   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:24.452168   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:24.466193   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:24.466202   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:24.478415   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:24.478426   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:24.497452   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:24.497463   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:27.011132   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:32.012837   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:32.013068   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:32.029862   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:32.029964   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:32.044953   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:32.045036   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:32.056844   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:32.056924   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:32.067966   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:32.068053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:32.078465   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:32.078547   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:32.089249   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:32.089331   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:32.100102   11137 logs.go:282] 0 containers: []
	W1205 11:11:32.100114   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:32.100190   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:32.113814   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:32.113833   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:32.113838   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:32.147429   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:32.147441   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:32.167670   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:32.167682   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:32.182258   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:32.182269   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:32.193833   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:32.193847   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:32.208567   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:32.208580   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:32.230361   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:32.230375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:32.242712   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:32.242723   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:32.257184   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:32.257198   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:32.261668   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:32.261677   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:32.299942   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:32.299954   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:32.312143   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:32.312156   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:32.323550   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:32.323561   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:32.335380   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:32.335393   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:32.347326   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:32.347339   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:34.873462   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:39.875671   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:39.875790   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:39.887112   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:39.887199   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:39.897488   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:39.897562   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:39.909902   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:39.909986   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:39.924546   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:39.924655   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:39.936976   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:39.937059   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:39.949513   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:39.949604   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:39.961475   11137 logs.go:282] 0 containers: []
	W1205 11:11:39.961487   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:39.961560   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:39.972624   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:39.972642   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:39.972648   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:39.986089   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:39.986101   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:39.999626   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:39.999637   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:40.011494   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:40.011504   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:40.025029   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:40.025042   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:40.062542   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:40.062563   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:40.101258   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:40.101272   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:40.116667   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:40.116683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:40.137835   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:40.137850   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:40.158152   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:40.158164   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:40.170886   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:40.170899   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:40.196088   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:40.196108   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:40.208572   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:40.208585   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:40.224212   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:40.224226   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:40.237886   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:40.237900   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:42.744920   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:47.747175   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:47.747446   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:47.769642   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:47.769744   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:47.783713   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:47.783807   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:47.797443   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:47.797529   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:47.812935   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:47.813006   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:47.823347   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:47.823425   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:47.833800   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:47.833870   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:47.844462   11137 logs.go:282] 0 containers: []
	W1205 11:11:47.844478   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:47.844549   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:47.854771   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:47.854788   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:47.854794   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:47.887764   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:47.887772   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:47.903520   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:47.903532   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:47.915307   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:47.915320   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:47.919908   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:47.919917   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:47.932120   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:47.932134   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:47.957002   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:47.957010   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:47.971216   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:47.971226   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:47.983301   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:47.983312   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:47.995302   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:47.995316   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:48.014226   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:48.014238   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:48.025700   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:48.025714   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:48.060806   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:48.060817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:48.075273   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:48.075284   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:48.087005   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:48.087017   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:50.607574   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:55.609928   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:55.610198   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:55.633279   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:55.633407   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:55.649820   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:55.649915   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:55.663420   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:55.663503   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:55.674654   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:55.674739   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:55.685296   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:55.685382   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:55.696721   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:55.696803   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:55.707654   11137 logs.go:282] 0 containers: []
	W1205 11:11:55.707670   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:55.707740   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:55.719503   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:55.719519   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:55.719525   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:55.733796   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:55.733808   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:55.745691   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:55.745702   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:55.766654   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:55.766666   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:55.801098   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:55.801108   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:55.812827   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:55.812840   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:55.825394   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:55.825407   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:55.837678   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:55.837690   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:55.855432   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:55.855442   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:55.867238   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:55.867249   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:55.889564   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:55.889573   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:55.894104   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:55.894113   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:55.929418   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:55.929432   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:55.944629   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:55.944643   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:55.956939   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:55.956952   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:58.471522   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:03.472396   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:03.477111   11137 out.go:201] 
	W1205 11:12:03.481137   11137 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1205 11:12:03.481151   11137 out.go:270] * 
	W1205 11:12:03.482255   11137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:12:03.493538   11137 out.go:201] 

                                                
                                                
** /stderr **
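
The stderr above captures minikube's health-probe loop: each cycle issues a GET against https://10.0.2.15:8443/healthz with a roughly 5-second client timeout (the gap between each api_server.go:253 check and its :269 "stopped" line), re-gathers the component logs, and retries until the overall 6m0s node wait expires. The sketch below is a minimal Go illustration of that polling pattern as read off the log timestamps; it is not minikube's actual api_server.go code, and the 2-second pause between cycles is an assumption inferred from the ~2-3 s gaps between retries.

	// Hypothetical sketch of the healthz retry loop visible in the stderr above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5 s between each ":253" check and its ":269" timeout line
			Transport: &http.Transport{
				// The apiserver serves a self-signed cert, so verification is skipped here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget in the exit message
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(2 * time.Second) // assumed pause; the log shows ~2-3 s between retry cycles
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}
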
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-829000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-05 11:12:03.598924 -0800 PST m=+1328.244044043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-829000 -n running-upgrade-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-829000 -n running-upgrade-829000: exit status 2 (15.626916708s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-829000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-527000          | force-systemd-flag-527000 | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-497000              | force-systemd-env-497000  | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-497000           | force-systemd-env-497000  | jenkins | v1.34.0 | 05 Dec 24 11:02 PST | 05 Dec 24 11:02 PST |
	| start   | -p docker-flags-345000                | docker-flags-345000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-527000             | force-systemd-flag-527000 | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-527000          | force-systemd-flag-527000 | jenkins | v1.34.0 | 05 Dec 24 11:02 PST | 05 Dec 24 11:02 PST |
	| start   | -p cert-expiration-404000             | cert-expiration-404000    | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-345000 ssh               | docker-flags-345000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-345000 ssh               | docker-flags-345000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-345000                | docker-flags-345000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST | 05 Dec 24 11:02 PST |
	| start   | -p cert-options-748000                | cert-options-748000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-748000 ssh               | cert-options-748000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-748000 -- sudo        | cert-options-748000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-748000                | cert-options-748000       | jenkins | v1.34.0 | 05 Dec 24 11:02 PST | 05 Dec 24 11:02 PST |
	| start   | -p running-upgrade-829000             | minikube                  | jenkins | v1.26.0 | 05 Dec 24 11:02 PST | 05 Dec 24 11:03 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-829000             | running-upgrade-829000    | jenkins | v1.34.0 | 05 Dec 24 11:03 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-404000             | cert-expiration-404000    | jenkins | v1.34.0 | 05 Dec 24 11:05 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-404000             | cert-expiration-404000    | jenkins | v1.34.0 | 05 Dec 24 11:05 PST | 05 Dec 24 11:05 PST |
	| start   | -p kubernetes-upgrade-763000          | kubernetes-upgrade-763000 | jenkins | v1.34.0 | 05 Dec 24 11:05 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-763000          | kubernetes-upgrade-763000 | jenkins | v1.34.0 | 05 Dec 24 11:05 PST | 05 Dec 24 11:05 PST |
	| start   | -p kubernetes-upgrade-763000          | kubernetes-upgrade-763000 | jenkins | v1.34.0 | 05 Dec 24 11:05 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-763000          | kubernetes-upgrade-763000 | jenkins | v1.34.0 | 05 Dec 24 11:05 PST | 05 Dec 24 11:05 PST |
	| start   | -p stopped-upgrade-616000             | minikube                  | jenkins | v1.26.0 | 05 Dec 24 11:05 PST | 05 Dec 24 11:06 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-616000 stop           | minikube                  | jenkins | v1.26.0 | 05 Dec 24 11:06 PST | 05 Dec 24 11:06 PST |
	| start   | -p stopped-upgrade-616000             | stopped-upgrade-616000    | jenkins | v1.34.0 | 05 Dec 24 11:06 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 11:06:45
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 11:06:45.807067   11277 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:06:45.807233   11277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:06:45.807236   11277 out.go:358] Setting ErrFile to fd 2...
	I1205 11:06:45.807239   11277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:06:45.807371   11277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:06:45.808569   11277 out.go:352] Setting JSON to false
	I1205 11:06:45.828571   11277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5777,"bootTime":1733419828,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:06:45.828643   11277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:06:45.832761   11277 out.go:177] * [stopped-upgrade-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:06:45.840834   11277 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:06:45.840884   11277 notify.go:220] Checking for updates...
	I1205 11:06:45.848827   11277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:06:45.852729   11277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:06:45.855736   11277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:06:45.858775   11277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:06:45.861741   11277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:06:45.865085   11277 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:06:45.868775   11277 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:06:45.871752   11277 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:06:45.875794   11277 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:06:45.881811   11277 start.go:297] selected driver: qemu2
	I1205 11:06:45.881819   11277 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:06:45.881878   11277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:06:45.884743   11277 cni.go:84] Creating CNI manager for ""
	I1205 11:06:45.884774   11277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:06:45.884802   11277 start.go:340] cluster config:
	{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:06:45.884857   11277 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:06:45.893761   11277 out.go:177] * Starting "stopped-upgrade-616000" primary control-plane node in "stopped-upgrade-616000" cluster
	I1205 11:06:45.897759   11277 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:06:45.897774   11277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1205 11:06:45.897781   11277 cache.go:56] Caching tarball of preloaded images
	I1205 11:06:45.897853   11277 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:06:45.897858   11277 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1205 11:06:45.897922   11277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1205 11:06:45.898405   11277 start.go:360] acquireMachinesLock for stopped-upgrade-616000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:06:45.898449   11277 start.go:364] duration metric: took 38.584µs to acquireMachinesLock for "stopped-upgrade-616000"
	I1205 11:06:45.898457   11277 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:06:45.898461   11277 fix.go:54] fixHost starting: 
	I1205 11:06:45.898566   11277 fix.go:112] recreateIfNeeded on stopped-upgrade-616000: state=Stopped err=<nil>
	W1205 11:06:45.898572   11277 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:06:45.906765   11277 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-616000" ...
	I1205 11:06:45.937255   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:45.910800   11277 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:06:45.910867   11277 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51987-:22,hostfwd=tcp::51988-:2376,hostname=stopped-upgrade-616000 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/disk.qcow2
	I1205 11:06:45.957247   11277 main.go:141] libmachine: STDOUT: 
	I1205 11:06:45.957286   11277 main.go:141] libmachine: STDERR: 
	I1205 11:06:45.957294   11277 main.go:141] libmachine: Waiting for VM to start (ssh -p 51987 docker@127.0.0.1)...
	I1205 11:06:50.938484   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:50.938667   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:50.956476   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:50.956556   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:50.967060   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:50.967131   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:50.977607   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:50.977684   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:50.988102   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:50.988175   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:50.998672   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:50.998754   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:51.009262   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:51.009337   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:51.019682   11137 logs.go:282] 0 containers: []
	W1205 11:06:51.019693   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:51.019757   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:51.030614   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:51.030633   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:51.030642   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:51.048359   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:51.048370   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:51.069907   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:51.069918   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:51.080889   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:51.080899   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:51.092367   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:51.092379   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:51.130933   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:51.130943   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:06:51.145901   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:51.145911   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:51.173973   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:51.173984   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:51.185648   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:51.185679   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:51.204389   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:51.204399   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:51.216005   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:51.216016   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:51.231043   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:51.231054   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:51.247698   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:51.247710   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:51.291723   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:51.291733   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:51.296969   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:51.296977   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:51.310355   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:51.310364   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:51.326002   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:51.326011   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:53.851758   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:06:58.853252   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:06:58.853799   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:06:58.892524   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:06:58.892675   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:06:58.912288   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:06:58.912402   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:06:58.929728   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:06:58.929820   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:06:58.943208   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:06:58.943293   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:06:58.953944   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:06:58.954027   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:06:58.964400   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:06:58.964470   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:06:58.979971   11137 logs.go:282] 0 containers: []
	W1205 11:06:58.979983   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:06:58.980053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:06:58.990137   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:06:58.990154   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:06:58.990161   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:06:59.007465   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:06:59.007476   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:06:59.022773   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:06:59.022786   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:06:59.045824   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:06:59.045833   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:06:59.081216   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:06:59.081229   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:06:59.100797   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:06:59.100811   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:06:59.115591   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:06:59.115604   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:06:59.127609   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:06:59.127621   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:06:59.139802   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:06:59.139817   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:06:59.182277   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:06:59.182288   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:06:59.193344   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:06:59.193355   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:06:59.204788   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:06:59.204800   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:06:59.219135   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:06:59.219147   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:06:59.235477   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:06:59.235491   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:06:59.265083   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:06:59.265094   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:06:59.277207   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:06:59.277218   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:06:59.281924   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:06:59.281931   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:01.799415   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:05.790114   11277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1205 11:07:05.790494   11277 machine.go:93] provisionDockerMachine start ...
	I1205 11:07:05.790589   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:05.790821   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:05.790829   11277 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 11:07:05.851394   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 11:07:05.851409   11277 buildroot.go:166] provisioning hostname "stopped-upgrade-616000"
	I1205 11:07:05.851482   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:05.851596   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:05.851602   11277 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-616000 && echo "stopped-upgrade-616000" | sudo tee /etc/hostname
	I1205 11:07:05.912653   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-616000
	
	I1205 11:07:05.912720   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:05.912829   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:05.912838   11277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-616000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-616000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-616000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 11:07:05.975243   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 11:07:05.975255   11277 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20052-8600/.minikube CaCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20052-8600/.minikube}
	I1205 11:07:05.975269   11277 buildroot.go:174] setting up certificates
	I1205 11:07:05.975273   11277 provision.go:84] configureAuth start
	I1205 11:07:05.975276   11277 provision.go:143] copyHostCerts
	I1205 11:07:05.975360   11277 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem, removing ...
	I1205 11:07:05.975368   11277 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem
	I1205 11:07:05.975496   11277 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem (1082 bytes)
	I1205 11:07:05.975708   11277 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem, removing ...
	I1205 11:07:05.975711   11277 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem
	I1205 11:07:05.975767   11277 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem (1123 bytes)
	I1205 11:07:05.975886   11277 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem, removing ...
	I1205 11:07:05.975889   11277 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem
	I1205 11:07:05.975939   11277 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem (1679 bytes)
	I1205 11:07:05.976034   11277 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-616000 san=[127.0.0.1 localhost minikube stopped-upgrade-616000]
	I1205 11:07:06.027537   11277 provision.go:177] copyRemoteCerts
	I1205 11:07:06.027587   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 11:07:06.027594   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:07:06.058736   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 11:07:06.065491   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 11:07:06.072426   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 11:07:06.079481   11277 provision.go:87] duration metric: took 104.205ms to configureAuth
	I1205 11:07:06.079490   11277 buildroot.go:189] setting minikube options for container-runtime
	I1205 11:07:06.079608   11277 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:07:06.079654   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.079738   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.079743   11277 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 11:07:06.139653   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1205 11:07:06.139663   11277 buildroot.go:70] root file system type: tmpfs
	I1205 11:07:06.139719   11277 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 11:07:06.139784   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.139896   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.139935   11277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 11:07:06.205726   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 11:07:06.205796   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.205906   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.205915   11277 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 11:07:06.570946   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1205 11:07:06.570961   11277 machine.go:96] duration metric: took 780.467125ms to provisionDockerMachine
	I1205 11:07:06.570968   11277 start.go:293] postStartSetup for "stopped-upgrade-616000" (driver="qemu2")
	I1205 11:07:06.570975   11277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 11:07:06.571051   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 11:07:06.571061   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:07:06.602746   11277 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 11:07:06.604050   11277 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 11:07:06.604058   11277 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20052-8600/.minikube/addons for local assets ...
	I1205 11:07:06.604151   11277 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20052-8600/.minikube/files for local assets ...
	I1205 11:07:06.604292   11277 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem -> 91362.pem in /etc/ssl/certs
	I1205 11:07:06.604460   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 11:07:06.607485   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem --> /etc/ssl/certs/91362.pem (1708 bytes)
	I1205 11:07:06.614699   11277 start.go:296] duration metric: took 43.726ms for postStartSetup
	I1205 11:07:06.614713   11277 fix.go:56] duration metric: took 20.716471834s for fixHost
	I1205 11:07:06.614756   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.614854   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.614865   11277 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 11:07:06.672418   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733425626.957073212
	
	I1205 11:07:06.672429   11277 fix.go:216] guest clock: 1733425626.957073212
	I1205 11:07:06.672433   11277 fix.go:229] Guest: 2024-12-05 11:07:06.957073212 -0800 PST Remote: 2024-12-05 11:07:06.614715 -0800 PST m=+20.837003751 (delta=342.358212ms)
	I1205 11:07:06.672444   11277 fix.go:200] guest clock delta is within tolerance: 342.358212ms
	I1205 11:07:06.672448   11277 start.go:83] releasing machines lock for "stopped-upgrade-616000", held for 20.774214375s
	I1205 11:07:06.672527   11277 ssh_runner.go:195] Run: cat /version.json
	I1205 11:07:06.672528   11277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 11:07:06.672539   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:07:06.672546   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	W1205 11:07:06.673158   11277 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51987: connect: connection refused
	I1205 11:07:06.673181   11277 retry.go:31] will retry after 177.300471ms: dial tcp [::1]:51987: connect: connection refused
	W1205 11:07:06.705013   11277 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 11:07:06.705065   11277 ssh_runner.go:195] Run: systemctl --version
	I1205 11:07:06.707050   11277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 11:07:06.708936   11277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 11:07:06.708976   11277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1205 11:07:06.712004   11277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1205 11:07:06.716734   11277 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 11:07:06.716744   11277 start.go:495] detecting cgroup driver to use...
	I1205 11:07:06.716814   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:07:06.723995   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1205 11:07:06.727378   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 11:07:06.730698   11277 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 11:07:06.730737   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 11:07:06.733840   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:07:06.736732   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 11:07:06.740004   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:07:06.743412   11277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 11:07:06.746656   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 11:07:06.749512   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 11:07:06.752282   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 11:07:06.755506   11277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 11:07:06.758676   11277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 11:07:06.761361   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:06.852396   11277 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1205 11:07:06.862244   11277 start.go:495] detecting cgroup driver to use...
	I1205 11:07:06.862336   11277 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 11:07:06.873694   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:07:06.889551   11277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 11:07:06.906323   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:07:06.938253   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:07:06.943790   11277 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 11:07:07.009343   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:07:07.014550   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:07:07.020372   11277 ssh_runner.go:195] Run: which cri-dockerd
	I1205 11:07:07.021812   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 11:07:07.024607   11277 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1205 11:07:07.029971   11277 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 11:07:07.101201   11277 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 11:07:07.182868   11277 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 11:07:07.182927   11277 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 11:07:07.188702   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:07.275341   11277 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:07:08.424376   11277 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1490185s)
	I1205 11:07:08.424464   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 11:07:08.429354   11277 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 11:07:08.435647   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:07:08.440819   11277 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 11:07:08.524492   11277 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 11:07:08.595850   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:08.683514   11277 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 11:07:08.690069   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:07:08.694562   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:08.757763   11277 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 11:07:08.795631   11277 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 11:07:08.795718   11277 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 11:07:08.797717   11277 start.go:563] Will wait 60s for crictl version
	I1205 11:07:08.797779   11277 ssh_runner.go:195] Run: which crictl
	I1205 11:07:08.799249   11277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 11:07:08.814813   11277 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1205 11:07:08.814890   11277 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:07:08.832322   11277 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:07:06.802011   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:06.802171   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:06.821305   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:06.821384   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:06.834464   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:06.834548   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:06.845334   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:06.845407   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:06.857376   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:06.857481   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:06.874189   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:06.874255   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:06.890333   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:06.890394   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:06.902576   11137 logs.go:282] 0 containers: []
	W1205 11:07:06.902590   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:06.902655   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:06.915244   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:06.915263   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:06.915269   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:06.928222   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:06.928233   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:06.944007   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:06.944017   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:06.956225   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:06.956241   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:06.970354   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:06.970363   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:06.981789   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:06.981804   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:06.993295   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:06.993308   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:07.008362   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:07.008375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:07.021599   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:07.021607   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:07.040921   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:07.040932   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:07.064085   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:07.064105   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:07.106636   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:07.106653   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:07.121756   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:07.121769   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:07.150466   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:07.150480   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:07.165208   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:07.165219   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:07.176581   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:07.176593   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:07.181446   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:07.181456   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
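The second process (pid 11137) interleaved above is the same test's diagnostics loop: for each control-plane component it resolves container IDs with a docker name filter, then tails 400 lines of each container's logs. The whole pattern fits in a few lines (bash; component list and --tail 400 taken from the log):

-- sketch (bash) --
# Enumerate kube-system containers by their k8s_<component> name prefix and
# dump the tail of each one's logs, mirroring logs.go above.
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet storage-provisioner; do
  for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
    echo "== ${c} (${id}) =="
    docker logs --tail 400 "$id"
  done
done
-- /sketch --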
	I1205 11:07:08.852196   11277 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1205 11:07:08.852286   11277 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1205 11:07:08.853705   11277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
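The one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the fresh mapping, and install the temp file with sudo cp (a bare ">" redirect would run unprivileged and fail on /etc/hosts). Generalized (bash; the function name is illustrative, and NAME is treated as a regex just as in the original grep):

-- sketch (bash) --
# update_hosts IP NAME: replace (or add) the NAME entry in /etc/hosts.
update_hosts() {
  local ip=$1 name=$2 tmp="/tmp/h.$$"
  { grep -v $'\t'"${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
  sudo cp "$tmp" /etc/hosts
  rm -f "$tmp"
}
update_hosts 10.0.2.2 host.minikube.internal
-- /sketch --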
	I1205 11:07:08.857347   11277 kubeadm.go:883] updating cluster {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1205 11:07:08.857395   11277 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:07:08.857443   11277 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:07:08.868126   11277 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:07:08.868135   11277 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:07:08.868195   11277 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:07:08.871802   11277 ssh_runner.go:195] Run: which lz4
	I1205 11:07:08.872980   11277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 11:07:08.874284   11277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 11:07:08.874292   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1205 11:07:09.799856   11277 docker.go:653] duration metric: took 926.925916ms to copy over tarball
	I1205 11:07:09.799928   11277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 11:07:09.718680   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:10.971867   11277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171934834s)
	I1205 11:07:10.971882   11277 ssh_runner.go:146] rm: /preloaded.tar.lz4
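The preload sequence above (stat probe, 359 MB copy, tar -I lz4 into /var, then rm) is how minikube avoids pulling every image over the network. As a standalone sketch (bash; ssh/scp and the "minikube" host alias stand in for minikube's ssh_runner, the cache path matches the log):

-- sketch (bash; host alias hypothetical) --
# Ship the cached image tarball to the guest only if it is not already there,
# unpack it into /var preserving extended attributes, then clean up.
TARBALL="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"
if ! ssh minikube 'stat -c "%s %y" /preloaded.tar.lz4' >/dev/null 2>&1; then
  scp "$TARBALL" minikube:/preloaded.tar.lz4
fi
ssh minikube 'sudo tar --xattrs --xattrs-include security.capability \
    -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
-- /sketch --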
	I1205 11:07:10.988209   11277 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:07:10.991843   11277 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1205 11:07:10.997262   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:11.075722   11277 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:07:12.631648   11277 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.555926333s)
	I1205 11:07:12.631743   11277 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:07:12.643041   11277 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:07:12.643052   11277 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:07:12.643057   11277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 11:07:12.648898   11277 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:12.650777   11277 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:12.652461   11277 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:12.652459   11277 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:12.654179   11277 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:12.654376   11277 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:12.655991   11277 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:12.655962   11277 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:12.657305   11277 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:12.657384   11277 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:12.658589   11277 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:12.658714   11277 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:07:12.659875   11277 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:12.660006   11277 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:12.661032   11277 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:07:12.661741   11277 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.189827   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:13.200419   11277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1205 11:07:13.200455   11277 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:13.200515   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:13.211501   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1205 11:07:13.230857   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:13.241540   11277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1205 11:07:13.241576   11277 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:13.241644   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:13.243159   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:13.253432   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1205 11:07:13.260907   11277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1205 11:07:13.260930   11277 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:13.261001   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:13.271406   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1205 11:07:13.312809   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:13.323288   11277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1205 11:07:13.323313   11277 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:13.323379   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:13.335240   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:13.337903   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1205 11:07:13.346951   11277 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1205 11:07:13.346975   11277 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:13.347052   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:13.357071   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1205 11:07:13.431160   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 11:07:13.440908   11277 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1205 11:07:13.440927   11277 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1205 11:07:13.440987   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1205 11:07:13.450757   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:07:13.450901   11277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 11:07:13.453265   11277 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1205 11:07:13.453278   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1205 11:07:13.461409   11277 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 11:07:13.461417   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1205 11:07:13.486636   11277 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
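Each cached image above goes through the same three steps: inspect the runtime's image ID against the expected hash, remove the image on mismatch, then stream the cached tarball into docker load. As one function (bash; the function name and arguments are illustrative):

-- sketch (bash) --
# load_cached_image IMAGE EXPECTED_ID TARBALL: make the runtime's copy of
# IMAGE match the cache, re-loading from TARBALL when the ID differs.
load_cached_image() {
  local img=$1 want=$2 tar=$3 have
  have=$(docker image inspect --format '{{.Id}}' "$img" 2>/dev/null || true)
  have=${have#sha256:}               # inspect prefixes IDs with "sha256:"
  if [ "$have" != "$want" ]; then
    docker rmi "$img" >/dev/null 2>&1 || true
    sudo cat "$tar" | docker load
  fi
}
# e.g., for the pause image above:
# load_cached_image registry.k8s.io/pause:3.7 \
#   e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550 \
#   /var/lib/minikube/images/pause_3.7
-- /sketch --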
	W1205 11:07:13.549466   11277 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:07:13.549621   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.560640   11277 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1205 11:07:13.560666   11277 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.560734   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.570818   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:07:13.570969   11277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:07:13.572445   11277 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1205 11:07:13.572457   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1205 11:07:13.617167   11277 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:07:13.617180   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1205 11:07:13.656281   11277 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1205 11:07:13.664214   11277 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:07:13.664349   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:13.674668   11277 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 11:07:13.674696   11277 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:13.674757   11277 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:13.688899   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 11:07:13.689038   11277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:07:13.690351   11277 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 11:07:13.690363   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 11:07:13.719544   11277 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:07:13.719557   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 11:07:13.959591   11277 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 11:07:13.959632   11277 cache_images.go:92] duration metric: took 1.316580958s to LoadCachedImages
	W1205 11:07:13.959672   11277 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1205 11:07:13.959676   11277 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1205 11:07:13.959728   11277 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-616000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
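The empty ExecStart= in the unit above is the standard systemd idiom for replacing a command: a bare ExecStart= clears any previously defined value so the following ExecStart sets the one and only command (required in drop-ins, harmless in a full unit). Written out as the drop-in minikube copies a few lines below (bash; presented as a drop-in for illustration):

-- sketch (bash) --
# Render the kubelet command line above. The empty ExecStart= clears whatever
# the base kubelet.service defined so the next line replaces it.
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --config=/var/lib/kubelet/config.yaml \
  --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock \
  --hostname-override=stopped-upgrade-616000 \
  --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
EOF
sudo systemctl daemon-reload
-- /sketch --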
	I1205 11:07:13.959801   11277 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 11:07:13.973930   11277 cni.go:84] Creating CNI manager for ""
	I1205 11:07:13.973946   11277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:07:13.973953   11277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 11:07:13.973961   11277 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-616000 NodeName:stopped-upgrade-616000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 11:07:13.974046   11277 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-616000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
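Before the init phases further down consume this file, a config like the one above can be exercised without side effects: kubeadm init's --dry-run prints the manifests it would write instead of writing them (bash; a sanity check for illustration, not a step the test itself runs):

-- sketch (bash) --
# Show what the generated config would produce without touching /etc/kubernetes.
sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
-- /sketch --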
	I1205 11:07:13.974125   11277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1205 11:07:13.976878   11277 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 11:07:13.976915   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 11:07:13.979721   11277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 11:07:13.984912   11277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 11:07:13.989696   11277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1205 11:07:13.994792   11277 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1205 11:07:13.996041   11277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 11:07:14.000076   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:14.080441   11277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:07:14.089545   11277 certs.go:68] Setting up /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000 for IP: 10.0.2.15
	I1205 11:07:14.089558   11277 certs.go:194] generating shared ca certs ...
	I1205 11:07:14.089567   11277 certs.go:226] acquiring lock for ca certs: {Name:mk120c2a781c4636bd95493f524c24b1dcf3780a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.089759   11277 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.key
	I1205 11:07:14.090523   11277 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.key
	I1205 11:07:14.090531   11277 certs.go:256] generating profile certs ...
	I1205 11:07:14.090830   11277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.key
	I1205 11:07:14.090850   11277 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213
	I1205 11:07:14.090859   11277 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1205 11:07:14.163734   11277 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 ...
	I1205 11:07:14.163753   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213: {Name:mk558acf8deae327405a8215bab480af41d675bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.164126   11277 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 ...
	I1205 11:07:14.164131   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213: {Name:mk6c029614b2bb5f744c5800561c046feb5faba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.164314   11277 certs.go:381] copying /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt
	I1205 11:07:14.164444   11277 certs.go:385] copying /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key
	I1205 11:07:14.164759   11277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/proxy-client.key
	I1205 11:07:14.164953   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136.pem (1338 bytes)
	W1205 11:07:14.165168   11277 certs.go:480] ignoring /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136_empty.pem, impossibly tiny 0 bytes
	I1205 11:07:14.165176   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 11:07:14.165201   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem (1082 bytes)
	I1205 11:07:14.165222   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem (1123 bytes)
	I1205 11:07:14.165246   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem (1679 bytes)
	I1205 11:07:14.165292   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem (1708 bytes)
	I1205 11:07:14.165681   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 11:07:14.172429   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 11:07:14.180127   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 11:07:14.187031   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 11:07:14.193536   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 11:07:14.200238   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 11:07:14.207484   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 11:07:14.214981   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 11:07:14.222304   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem --> /usr/share/ca-certificates/91362.pem (1708 bytes)
	I1205 11:07:14.229213   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 11:07:14.235983   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136.pem --> /usr/share/ca-certificates/9136.pem (1338 bytes)
	I1205 11:07:14.243219   11277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 11:07:14.248290   11277 ssh_runner.go:195] Run: openssl version
	I1205 11:07:14.250181   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91362.pem && ln -fs /usr/share/ca-certificates/91362.pem /etc/ssl/certs/91362.pem"
	I1205 11:07:14.253100   11277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91362.pem
	I1205 11:07:14.254435   11277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 18:50 /usr/share/ca-certificates/91362.pem
	I1205 11:07:14.254462   11277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91362.pem
	I1205 11:07:14.256150   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91362.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 11:07:14.259710   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 11:07:14.263071   11277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:07:14.264457   11277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:07:14.264486   11277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:07:14.266207   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 11:07:14.269229   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9136.pem && ln -fs /usr/share/ca-certificates/9136.pem /etc/ssl/certs/9136.pem"
	I1205 11:07:14.272313   11277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9136.pem
	I1205 11:07:14.273671   11277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 18:50 /usr/share/ca-certificates/9136.pem
	I1205 11:07:14.273695   11277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9136.pem
	I1205 11:07:14.275376   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9136.pem /etc/ssl/certs/51391683.0"
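The ls/openssl/ln triplets above hand-build OpenSSL's hashed trust directory: each CA PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs), which is the name format OpenSSL's lookup expects. Condensed (bash; the function name is illustrative):

-- sketch (bash) --
# install_ca PEM: trust a CA the way the run above does, by linking it under
# its openssl subject hash.
install_ca() {
  local pem=$1 hash
  hash=$(openssl x509 -hash -noout -in "$pem")
  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
}
install_ca /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941.0 above
-- /sketch --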
	I1205 11:07:14.278769   11277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 11:07:14.280139   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 11:07:14.282971   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 11:07:14.285237   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 11:07:14.287339   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 11:07:14.289215   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 11:07:14.290989   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
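The six -checkend runs above are expiry probes: openssl x509 -checkend N exits non-zero if the certificate expires within N seconds, so 86400 asks "does this cert survive the next 24 hours?". In loop form (bash; cert list taken from the log):

-- sketch (bash) --
# Flag any control-plane certificate that expires within 24 hours.
certs=(apiserver-kubelet-client apiserver-etcd-client front-proxy-client
       etcd/server etcd/healthcheck-client etcd/peer)
for c in "${certs[@]}"; do
  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 ||
    echo "${c}.crt expires within 24h" >&2
done
-- /sketch --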
	I1205 11:07:14.293124   11277 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:07:14.293215   11277 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:07:14.303255   11277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 11:07:14.306729   11277 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 11:07:14.306739   11277 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 11:07:14.306770   11277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 11:07:14.309594   11277 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:07:14.309886   11277 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-616000" does not appear in /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:07:14.310013   11277 kubeconfig.go:62] /Users/jenkins/minikube-integration/20052-8600/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-616000" cluster setting kubeconfig missing "stopped-upgrade-616000" context setting]
	I1205 11:07:14.310209   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.310625   11277 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046c7740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:07:14.311122   11277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 11:07:14.313954   11277 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-616000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
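Drift detection above is just diff -u between the kubeadm.yaml already on disk and the freshly rendered .new file; a non-empty diff (diff exit status 1) triggers the reconfiguration that follows, ending in the cp a few lines below. Minimal form (bash):

-- sketch (bash) --
# Reconfigure only when the rendered config differs from the live one.
if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
  echo "config drift detected; restarting control plane from the new config" >&2
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
fi
-- /sketch --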
	I1205 11:07:14.313959   11277 kubeadm.go:1160] stopping kube-system containers ...
	I1205 11:07:14.314005   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:07:14.325294   11277 docker.go:483] Stopping containers: [bd0711c054c5 0275d18bc05a c475eaff13ec 1447f2c97140 b8c08aff7dab c744ec1de700 0279ac793008 d31b4a0b903b]
	I1205 11:07:14.325377   11277 ssh_runner.go:195] Run: docker stop bd0711c054c5 0275d18bc05a c475eaff13ec 1447f2c97140 b8c08aff7dab c744ec1de700 0279ac793008 d31b4a0b903b
	I1205 11:07:14.335694   11277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 11:07:14.341573   11277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:07:14.344270   11277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:07:14.344275   11277 kubeadm.go:157] found existing configuration files:
	
	I1205 11:07:14.344300   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf
	I1205 11:07:14.347512   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:07:14.347543   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:07:14.350778   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf
	I1205 11:07:14.353313   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:07:14.353353   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:07:14.356007   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf
	I1205 11:07:14.359310   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:07:14.359348   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:07:14.362286   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf
	I1205 11:07:14.364627   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:07:14.364652   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 11:07:14.367632   11277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:07:14.370950   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.397134   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.744507   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.872788   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.896881   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
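Rather than a full kubeadm init, the restart path above replays individual phases against the updated config, in dependency order: certs, kubeconfigs, kubelet bootstrap, control-plane static pods, then local etcd. Equivalent commands (bash; the kadm wrapper is illustrative):

-- sketch (bash) --
# Replay the init phases above against the regenerated kubeadm.yaml.
kadm() { sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm "$@"; }
for phase in "certs all" "kubeconfig all" "kubelet-start" \
             "control-plane all" "etcd local"; do
  # $phase is intentionally unquoted so multi-word phases split into arguments
  kadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
done
-- /sketch --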
	I1205 11:07:14.927413   11277 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:07:14.927503   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:07:15.429580   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:07:14.720818   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:14.720951   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:14.733072   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:14.733152   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:14.744867   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:14.744948   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:14.756029   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:14.756107   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:14.767350   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:14.767434   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:14.779238   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:14.779325   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:14.790926   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:14.791014   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:14.801765   11137 logs.go:282] 0 containers: []
	W1205 11:07:14.801779   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:14.801848   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:14.812657   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:14.812675   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:14.812682   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:14.850309   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:14.850324   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:14.870518   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:14.870532   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:14.913657   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:14.913674   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:14.950198   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:14.950214   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:14.965892   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:14.965904   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:14.981548   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:14.981567   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:14.999809   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:14.999820   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:15.024494   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:15.024503   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:15.065981   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:15.065998   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:15.082083   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:15.082099   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:15.096671   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:15.096683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:15.108321   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:15.108334   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:15.120217   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:15.120229   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:15.124886   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:15.124897   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:15.138681   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:15.138693   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:15.150784   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:15.150795   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:17.664635   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:15.929634   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:07:15.934485   11277 api_server.go:72] duration metric: took 1.007083166s to wait for apiserver process to appear ...
	I1205 11:07:15.934496   11277 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:07:15.934511   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
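The healthz wait that starts here is the same polling shape as the socket wait earlier: hit /healthz with a short per-request timeout until it answers "ok" or an overall deadline passes (pid 11137's repeated "context deadline exceeded" lines show what each failed probe looks like). Sketched (bash; curl stands in for minikube's Go HTTP client, and the 240s deadline and -k TLS skip are assumptions):

-- sketch (bash) --
# Poll the apiserver's /healthz until it reports "ok" or the deadline passes.
deadline=$((SECONDS + 240))
until [ "$(curl -ks --max-time 2 https://10.0.2.15:8443/healthz)" = "ok" ]; do
  if (( SECONDS >= deadline )); then
    echo "apiserver never became healthy" >&2
    break
  fi
  sleep 2
done
-- /sketch --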
	I1205 11:07:22.667117   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:22.667283   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:22.683873   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:22.683964   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:22.697294   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:22.697375   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:22.714720   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:22.714798   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:22.728892   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:22.728973   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:22.739813   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:22.739902   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:22.750651   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:22.750730   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:22.761150   11137 logs.go:282] 0 containers: []
	W1205 11:07:22.761163   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:22.761235   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:22.771735   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
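The container inventory built above comes from filtering docker ps -a on the k8s_<component> name prefix and printing only the IDs; two IDs per component means an exited instance plus its restarted replacement. A hedged sketch of that discovery step (component names and format string copied from the Run: lines; the real code runs these over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		ids := strings.Fields(string(out)) // e.g. [262c3ed215cb b4ab67c9a319]
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}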
	I1205 11:07:22.771757   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:22.771764   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:22.783692   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:22.783707   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:22.820380   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:22.820394   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:22.834714   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:22.834728   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:22.851015   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:22.851025   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:22.893948   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:22.893960   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:22.909758   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:22.909771   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:22.932738   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:22.932746   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:22.944239   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:22.944249   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:22.959840   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:22.959853   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:22.970685   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:22.970697   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:22.990288   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:22.990302   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:23.008717   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:23.008728   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:23.026068   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:23.026078   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:23.038628   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:23.038642   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:23.043345   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:23.043352   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:23.058807   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:23.058817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
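Each "Gathering logs for ..." pair in the cycle above then tails the last 400 lines of every discovered container (plus journalctl for kubelet/docker and a filtered dmesg). A sketch of the per-container half, using two container IDs from the log as placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainer mirrors: /bin/bash -c "docker logs --tail 400 <id>"
func tailContainer(id string) (string, error) {
	// CombinedOutput because docker logs writes to both stdout and stderr.
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"262c3ed215cb", "b4ab67c9a319"} { // IDs from the log
		logs, err := tailContainer(id)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("=== %s (%d bytes) ===\n%s", id, len(logs), logs)
	}
}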
	I1205 11:07:20.936594   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:20.936661   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:25.589867   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:25.936994   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:25.937018   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:30.592132   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:30.592364   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:30.608924   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:30.609039   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:30.621868   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:30.621953   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:30.632918   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:30.632994   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:30.643495   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:30.643579   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:30.654711   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:30.654799   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:30.665361   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:30.665437   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:30.675631   11137 logs.go:282] 0 containers: []
	W1205 11:07:30.675644   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:30.675717   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:30.686495   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:30.686516   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:30.686522   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:30.727126   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:30.727136   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:30.762416   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:30.762427   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:30.774023   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:30.774036   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:30.778550   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:30.778560   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:30.793344   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:30.793354   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:30.829009   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:30.829021   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:30.842730   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:30.842745   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:30.864893   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:30.864905   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:30.887264   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:30.887274   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:30.899864   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:30.899877   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:30.912168   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:30.912180   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:30.926732   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:30.926746   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:30.940879   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:30.940888   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:30.955382   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:30.955394   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:30.971260   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:30.971273   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:30.987591   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:30.987601   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:33.500632   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:30.937657   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:30.937682   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:38.502782   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:38.502910   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:38.514251   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:38.514334   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:38.525434   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:38.525518   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:38.535783   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:38.535866   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:38.549185   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:38.549268   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:38.559657   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:38.559738   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:38.570051   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:38.570127   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:38.580216   11137 logs.go:282] 0 containers: []
	W1205 11:07:38.580231   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:38.580298   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:38.590940   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:38.590959   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:38.590965   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:38.602269   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:38.602279   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:38.645092   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:38.645101   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:38.658973   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:38.658984   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:38.674039   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:38.674051   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:38.685057   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:38.685068   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:38.696779   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:38.696790   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:38.712742   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:38.712755   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:38.724492   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:38.724504   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:38.748809   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:38.748822   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:38.760518   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:38.760532   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:38.765048   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:38.765059   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:38.799473   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:38.799485   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:38.816510   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:38.816522   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:38.831892   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:38.831906   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:38.844151   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:38.844162   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:38.873572   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:38.873585   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:35.938176   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:35.938217   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:41.393837   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:40.938875   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:40.938917   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:46.396092   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:46.396236   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:07:46.408278   11137 logs.go:282] 2 containers: [262c3ed215cb b4ab67c9a319]
	I1205 11:07:46.408366   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:07:46.422106   11137 logs.go:282] 2 containers: [958667a8aed1 d83e8d46af5a]
	I1205 11:07:46.422189   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:07:46.432881   11137 logs.go:282] 1 containers: [8a5ec0469f49]
	I1205 11:07:46.432964   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:07:46.445067   11137 logs.go:282] 2 containers: [40c6e6634eaf fb5ba8ab7ba0]
	I1205 11:07:46.445145   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:07:46.455529   11137 logs.go:282] 1 containers: [62422ea4292b]
	I1205 11:07:46.455609   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:07:46.466058   11137 logs.go:282] 2 containers: [ee1f0118663e 6e40c464d81d]
	I1205 11:07:46.466127   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:07:46.476456   11137 logs.go:282] 0 containers: []
	W1205 11:07:46.476468   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:07:46.476538   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:07:46.487379   11137 logs.go:282] 2 containers: [e5971fc04e44 904e682b96d7]
	I1205 11:07:46.487398   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:07:46.487404   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:07:46.492211   11137 logs.go:123] Gathering logs for kube-scheduler [fb5ba8ab7ba0] ...
	I1205 11:07:46.492220   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb5ba8ab7ba0"
	I1205 11:07:46.506925   11137 logs.go:123] Gathering logs for kube-proxy [62422ea4292b] ...
	I1205 11:07:46.506935   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62422ea4292b"
	I1205 11:07:46.518613   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:07:46.518624   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:07:46.531118   11137 logs.go:123] Gathering logs for kube-apiserver [262c3ed215cb] ...
	I1205 11:07:46.531133   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 262c3ed215cb"
	I1205 11:07:46.545017   11137 logs.go:123] Gathering logs for coredns [8a5ec0469f49] ...
	I1205 11:07:46.545026   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8a5ec0469f49"
	I1205 11:07:46.556531   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:07:46.556543   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:07:46.579356   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:07:46.579368   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:07:46.621706   11137 logs.go:123] Gathering logs for kube-scheduler [40c6e6634eaf] ...
	I1205 11:07:46.621714   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40c6e6634eaf"
	I1205 11:07:46.633492   11137 logs.go:123] Gathering logs for kube-controller-manager [6e40c464d81d] ...
	I1205 11:07:46.633503   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e40c464d81d"
	I1205 11:07:46.648690   11137 logs.go:123] Gathering logs for storage-provisioner [904e682b96d7] ...
	I1205 11:07:46.648700   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 904e682b96d7"
	I1205 11:07:46.660328   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:07:46.660341   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:07:46.694965   11137 logs.go:123] Gathering logs for kube-apiserver [b4ab67c9a319] ...
	I1205 11:07:46.694976   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4ab67c9a319"
	I1205 11:07:46.723234   11137 logs.go:123] Gathering logs for etcd [958667a8aed1] ...
	I1205 11:07:46.723248   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 958667a8aed1"
	I1205 11:07:46.740809   11137 logs.go:123] Gathering logs for etcd [d83e8d46af5a] ...
	I1205 11:07:46.740818   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d83e8d46af5a"
	I1205 11:07:46.755364   11137 logs.go:123] Gathering logs for kube-controller-manager [ee1f0118663e] ...
	I1205 11:07:46.755375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee1f0118663e"
	I1205 11:07:46.773313   11137 logs.go:123] Gathering logs for storage-provisioner [e5971fc04e44] ...
	I1205 11:07:46.773324   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5971fc04e44"
	I1205 11:07:49.286933   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:45.939910   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:45.939942   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:54.288522   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:54.288649   11137 kubeadm.go:597] duration metric: took 4m4.503536375s to restartPrimaryControlPlane
	W1205 11:07:54.288740   11137 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 11:07:54.288788   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 11:07:55.289060   11137 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.000269792s)
	I1205 11:07:55.289149   11137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 11:07:55.295101   11137 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:07:55.298264   11137 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:07:55.301176   11137 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:07:55.301183   11137 kubeadm.go:157] found existing configuration files:
	
	I1205 11:07:55.301214   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/admin.conf
	I1205 11:07:55.303839   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:07:55.303875   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:07:55.307274   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/kubelet.conf
	I1205 11:07:55.310499   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:07:55.310526   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:07:55.313205   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/controller-manager.conf
	I1205 11:07:55.315744   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:07:55.315780   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:07:55.318994   11137 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/scheduler.conf
	I1205 11:07:55.321938   11137 kubeadm.go:163] "https://control-plane.minikube.internal:51775" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51775 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:07:55.321970   11137 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
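The four grep/rm pairs above enforce a simple invariant: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is deleted so that the following kubeadm init can regenerate it. A local sketch of that cleanup (endpoint copied from the log; the real code shells these commands out over SSH rather than touching a local filesystem):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51775"
	for _, f := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		b, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove so kubeadm rewrites it.
		if err != nil || !strings.Contains(string(b), endpoint) {
			os.Remove(path) // error ignored, mirroring "rm -f"
			fmt.Println("removed stale", path)
		}
	}
}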
	I1205 11:07:55.324576   11137 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 11:07:55.341290   11137 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 11:07:55.341329   11137 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 11:07:55.399472   11137 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 11:07:55.399528   11137 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 11:07:55.399583   11137 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 11:07:55.448901   11137 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 11:07:50.940948   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:50.940992   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:55.452080   11137 out.go:235]   - Generating certificates and keys ...
	I1205 11:07:55.452113   11137 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 11:07:55.452146   11137 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 11:07:55.452194   11137 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 11:07:55.452235   11137 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 11:07:55.452275   11137 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 11:07:55.452305   11137 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 11:07:55.452341   11137 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 11:07:55.452379   11137 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 11:07:55.452420   11137 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 11:07:55.452460   11137 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 11:07:55.452481   11137 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 11:07:55.452509   11137 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 11:07:55.658905   11137 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 11:07:55.745409   11137 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 11:07:55.842588   11137 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 11:07:56.002390   11137 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 11:07:56.032349   11137 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 11:07:56.032758   11137 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 11:07:56.032851   11137 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 11:07:56.127252   11137 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 11:07:56.131259   11137 out.go:235]   - Booting up control plane ...
	I1205 11:07:56.131308   11137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 11:07:56.131359   11137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 11:07:56.131394   11137 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 11:07:56.132628   11137 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 11:07:56.133317   11137 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 11:07:55.943041   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:55.943069   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:01.135462   11137 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001975 seconds
	I1205 11:08:01.135559   11137 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 11:08:01.141353   11137 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 11:08:01.651262   11137 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 11:08:01.651370   11137 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-829000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 11:08:02.156937   11137 kubeadm.go:310] [bootstrap-token] Using token: gxdypa.k5ak3nnpbvxiuq31
	I1205 11:08:02.163417   11137 out.go:235]   - Configuring RBAC rules ...
	I1205 11:08:02.163507   11137 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 11:08:02.166343   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 11:08:02.174526   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 11:08:02.175574   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 11:08:02.176500   11137 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 11:08:02.177454   11137 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 11:08:02.180876   11137 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 11:08:02.343533   11137 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 11:08:02.568875   11137 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 11:08:02.569400   11137 kubeadm.go:310] 
	I1205 11:08:02.569433   11137 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 11:08:02.569437   11137 kubeadm.go:310] 
	I1205 11:08:02.569480   11137 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 11:08:02.569485   11137 kubeadm.go:310] 
	I1205 11:08:02.569499   11137 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 11:08:02.569537   11137 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 11:08:02.569563   11137 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 11:08:02.569568   11137 kubeadm.go:310] 
	I1205 11:08:02.569603   11137 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 11:08:02.569611   11137 kubeadm.go:310] 
	I1205 11:08:02.569636   11137 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 11:08:02.569640   11137 kubeadm.go:310] 
	I1205 11:08:02.569666   11137 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 11:08:02.569722   11137 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 11:08:02.569779   11137 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 11:08:02.569784   11137 kubeadm.go:310] 
	I1205 11:08:02.569824   11137 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 11:08:02.569876   11137 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 11:08:02.569880   11137 kubeadm.go:310] 
	I1205 11:08:02.569922   11137 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gxdypa.k5ak3nnpbvxiuq31 \
	I1205 11:08:02.569973   11137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a \
	I1205 11:08:02.569985   11137 kubeadm.go:310] 	--control-plane 
	I1205 11:08:02.569987   11137 kubeadm.go:310] 
	I1205 11:08:02.570067   11137 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 11:08:02.570073   11137 kubeadm.go:310] 
	I1205 11:08:02.570120   11137 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gxdypa.k5ak3nnpbvxiuq31 \
	I1205 11:08:02.570177   11137 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a 
	I1205 11:08:02.570238   11137 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
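The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. If the init output has scrolled away, it can be recomputed from the CA cert; a small Go sketch (the CA path is kubeadm's default location, an assumption for this cluster):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // kubeadm's default CA path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}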
	I1205 11:08:02.570246   11137 cni.go:84] Creating CNI manager for ""
	I1205 11:08:02.570253   11137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:08:02.576404   11137 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 11:08:02.584438   11137 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 11:08:02.587536   11137 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
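The 496-byte 1-k8s.conflist pushed above is not reproduced in this log. The sketch below writes a representative bridge-plugin conflist of the same general shape; every field value here is an illustrative assumption, not minikube's exact file:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	// Representative bridge CNI config; minikube's real 1-k8s.conflist may differ.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{{
			"type":             "bridge",
			"bridge":           "bridge",
			"isDefaultGateway": true,
			"ipMasq":           true,
			"hairpinMode":      true,
			"ipam": map[string]any{
				"type":   "host-local",
				"subnet": "10.244.0.0/16",
			},
		}},
	}
	b, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Writing under /etc/cni/net.d requires root on a real node.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", append(b, '\n'), 0o644); err != nil {
		log.Fatal(err)
	}
}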
	I1205 11:08:02.592404   11137 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 11:08:02.592461   11137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 11:08:02.592479   11137 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-829000 minikube.k8s.io/updated_at=2024_12_05T11_08_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=running-upgrade-829000 minikube.k8s.io/primary=true
	I1205 11:08:02.635264   11137 ops.go:34] apiserver oom_adj: -16
	I1205 11:08:02.635266   11137 kubeadm.go:1113] duration metric: took 42.850708ms to wait for elevateKubeSystemPrivileges
	I1205 11:08:02.635279   11137 kubeadm.go:394] duration metric: took 4m12.864185541s to StartCluster
	I1205 11:08:02.635293   11137 settings.go:142] acquiring lock: {Name:mk685c3b4b58f394644fceb0edca00785ff86d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:08:02.635475   11137 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:08:02.635890   11137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:08:02.636104   11137 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:08:02.636116   11137 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 11:08:02.636154   11137 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-829000"
	I1205 11:08:02.636163   11137 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-829000"
	W1205 11:08:02.636166   11137 addons.go:243] addon storage-provisioner should already be in state true
	I1205 11:08:02.636179   11137 host.go:66] Checking if "running-upgrade-829000" exists ...
	I1205 11:08:02.636206   11137 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-829000"
	I1205 11:08:02.636228   11137 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-829000"
	I1205 11:08:02.636315   11137 config.go:182] Loaded profile config "running-upgrade-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:08:02.637244   11137 kapi.go:59] client config for running-upgrade-829000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/running-upgrade-829000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102197740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:08:02.637690   11137 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-829000"
	W1205 11:08:02.637695   11137 addons.go:243] addon default-storageclass should already be in state true
	I1205 11:08:02.637707   11137 host.go:66] Checking if "running-upgrade-829000" exists ...
	I1205 11:08:02.640457   11137 out.go:177] * Verifying Kubernetes components...
	I1205 11:08:02.640750   11137 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 11:08:02.644890   11137 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 11:08:02.644896   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:08:02.647403   11137 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:08:02.651474   11137 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:08:02.655443   11137 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:08:02.655449   11137 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 11:08:02.655455   11137 sshutil.go:53] new ssh client: &{IP:localhost Port:51743 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/running-upgrade-829000/id_rsa Username:docker}
	I1205 11:08:02.740607   11137 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:08:02.746417   11137 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:08:02.746474   11137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:08:02.750659   11137 api_server.go:72] duration metric: took 114.545042ms to wait for apiserver process to appear ...
	I1205 11:08:02.750667   11137 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:08:02.750674   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:02.777738   11137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 11:08:02.799889   11137 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:08:03.104754   11137 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 11:08:03.104765   11137 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 11:08:00.945020   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:00.945049   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:07.752729   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:07.752782   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:05.947248   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:05.947288   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:12.753070   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:12.753111   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:10.949574   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:10.949612   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:17.753535   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:17.753560   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:15.951780   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:15.951947   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:15.966996   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:15.967089   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:15.984901   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:15.984976   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:16.000677   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:16.000762   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:16.011795   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:16.011880   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:16.021907   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:16.021986   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:16.032949   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:16.033030   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:16.043115   11277 logs.go:282] 0 containers: []
	W1205 11:08:16.043127   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:16.043194   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:16.053240   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:16.053266   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:16.053272   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:16.068701   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:16.068716   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:16.086251   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:16.086266   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:16.098392   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:16.098402   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:16.136960   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:16.136970   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:16.249812   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:16.249825   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:16.265819   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:16.265831   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:16.282641   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:16.282653   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:16.313692   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:16.313713   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:16.333469   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:16.333483   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:16.346792   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:16.346804   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:16.359175   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:16.359192   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:16.384925   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:16.384945   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:16.389957   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:16.389968   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:16.405710   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:16.405724   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:16.418974   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:16.418988   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:18.935179   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:22.753999   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:22.754058   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:23.935886   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:23.936224   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:23.965642   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:23.965796   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:23.983718   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:23.983827   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:24.000881   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:24.000963   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:24.012619   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:24.012701   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:24.023667   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:24.023740   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:24.034018   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:24.034092   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:24.044890   11277 logs.go:282] 0 containers: []
	W1205 11:08:24.044900   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:24.044962   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:24.055874   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:24.055896   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:24.055902   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:24.081625   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:24.081635   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:24.096139   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:24.096148   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:24.108994   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:24.109008   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:24.147385   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:24.147393   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:24.151246   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:24.151254   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:24.165105   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:24.165118   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:24.179988   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:24.179999   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:24.191088   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:24.191100   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:24.202548   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:24.202560   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:24.214488   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:24.214499   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:24.249742   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:24.249752   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:24.263557   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:24.263566   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:24.288132   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:24.288142   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:24.299607   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:24.299618   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:24.323133   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:24.323145   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:27.754859   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:27.754884   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:26.838005   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:32.755726   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:32.755763   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1205 11:08:33.106919   11137 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1205 11:08:33.118191   11137 out.go:177] * Enabled addons: storage-provisioner
	I1205 11:08:33.127113   11137 addons.go:510] duration metric: took 30.491318667s for enable addons: enabled=[storage-provisioner]
	I1205 11:08:31.840267   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:31.840586   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:31.869908   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:31.870052   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:31.888014   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:31.888128   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:31.901356   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:31.901446   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:31.912532   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:31.912606   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:31.922901   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:31.922982   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:31.933806   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:31.933892   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:31.943919   11277 logs.go:282] 0 containers: []
	W1205 11:08:31.943932   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:31.943993   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:31.958272   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:31.958289   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:31.958294   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:31.962724   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:31.962730   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:31.976552   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:31.976563   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:31.991312   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:31.991324   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:32.029947   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:32.029962   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:32.062023   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:32.062034   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:32.073543   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:32.073554   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:32.090437   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:32.090446   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:32.103243   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:32.103254   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:32.115063   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:32.115074   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:32.140174   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:32.140185   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:32.152001   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:32.152019   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:32.191475   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:32.191487   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:32.205558   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:32.205571   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:32.216896   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:32.216907   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:32.228479   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:32.228490   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:34.745191   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:37.756815   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:37.756837   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:39.745873   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:39.746027   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:39.756930   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:39.757013   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:39.767744   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:39.767827   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:39.778117   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:39.778196   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:39.788422   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:39.788508   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:39.803396   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:39.803471   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:39.814060   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:39.814134   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:39.824643   11277 logs.go:282] 0 containers: []
	W1205 11:08:39.824655   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:39.824712   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:39.836250   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:39.836269   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:39.836275   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:39.874531   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:39.874542   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:39.888741   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:39.888755   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:39.903842   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:39.903854   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:39.916931   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:39.916941   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:39.941112   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:39.941120   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:39.953204   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:39.953215   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:39.966705   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:39.966719   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:39.984071   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:39.984080   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:39.988090   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:39.988097   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:40.027074   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:40.027085   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:40.039233   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:40.039244   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:40.054318   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:40.054329   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:40.069516   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:40.069527   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:40.096123   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:40.096134   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:40.110744   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:40.110754   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:42.758153   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:42.758217   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:42.632086   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:47.760374   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:47.760392   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:47.634434   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:47.634737   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:47.658400   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:47.658517   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:47.674620   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:47.674708   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:47.699918   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:47.699999   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:47.710872   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:47.710963   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:47.721869   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:47.721944   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:47.732923   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:47.733004   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:47.743023   11277 logs.go:282] 0 containers: []
	W1205 11:08:47.743034   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:47.743097   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:47.753764   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:47.753784   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:47.753790   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:47.779669   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:47.779680   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:47.791428   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:47.791442   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:47.827279   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:47.827287   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:47.850165   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:47.850175   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:47.867351   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:47.867361   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:47.892396   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:47.892402   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:47.896383   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:47.896391   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:47.916578   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:47.916588   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:47.931315   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:47.931327   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:47.946078   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:47.946090   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:47.958053   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:47.958064   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:47.994790   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:47.994801   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:48.012537   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:48.012548   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:48.023980   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:48.023989   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:48.036577   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:48.036589   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:50.551094   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:52.762527   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:52.762566   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:55.553458   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:55.553665   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:55.568373   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:55.568458   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:55.579716   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:55.579798   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:55.590107   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:55.590185   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:55.600831   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:55.600912   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:55.611175   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:55.611257   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:55.621743   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:55.621825   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:55.631952   11277 logs.go:282] 0 containers: []
	W1205 11:08:55.631965   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:55.632031   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:55.642107   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:55.642124   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:55.642129   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:55.679403   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:55.679412   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:55.683429   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:55.683434   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:55.705025   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:55.705037   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:55.716817   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:55.716827   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:55.731642   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:55.731653   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:55.749000   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:55.749010   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:55.761432   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:55.761444   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:55.787035   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:55.787046   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:55.798330   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:55.798341   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:57.764811   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:57.764869   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:55.832561   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:55.832572   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:55.846546   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:55.846558   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:55.867280   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:55.867290   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:55.879132   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:55.879143   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:55.892328   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:55.892338   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:55.904048   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:55.904058   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:58.429702   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:02.767158   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:02.767269   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:02.781569   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:02.781664   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:02.792657   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:02.792738   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:02.803171   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:02.803257   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:02.814025   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:02.814095   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:02.824999   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:02.825066   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:02.835811   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:02.835893   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:02.845904   11137 logs.go:282] 0 containers: []
	W1205 11:09:02.845915   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:02.845982   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:02.860222   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:02.860240   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:02.860245   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:02.885192   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:02.885204   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:02.899299   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:02.899311   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:02.911353   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:02.911364   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:02.928882   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:02.928892   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:02.940480   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:02.940491   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:02.952278   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:02.952288   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:02.972389   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:02.972401   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:02.984003   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:02.984014   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:02.995315   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:02.995329   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:03.031815   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:03.031826   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:03.036573   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:03.036581   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:03.071284   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:03.071295   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:03.432332   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:03.432490   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:03.443978   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:03.444072   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:03.454424   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:03.454515   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:03.465393   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:03.465468   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:03.475748   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:03.475826   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:03.486521   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:03.486602   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:03.497784   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:03.497856   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:03.507577   11277 logs.go:282] 0 containers: []
	W1205 11:09:03.507589   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:03.507652   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:03.518587   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:03.518605   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:03.518611   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:03.557262   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:03.557275   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:03.569188   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:03.569200   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:03.588402   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:03.588414   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:03.602068   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:03.602080   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:03.617604   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:03.617614   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:03.632537   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:03.632548   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:03.647929   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:03.647942   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:03.652287   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:03.652293   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:03.690534   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:03.690546   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:03.704979   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:03.704991   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:03.716934   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:03.716949   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:03.736742   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:03.736752   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:03.762626   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:03.762642   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:03.801929   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:03.801946   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:03.813797   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:03.813810   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:05.588032   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:06.327743   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:10.590298   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:10.590579   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:10.612779   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:10.612919   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:10.632244   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:10.632335   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:10.644475   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:10.644553   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:10.655321   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:10.655402   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:10.666077   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:10.666158   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:10.676434   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:10.676515   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:10.686910   11137 logs.go:282] 0 containers: []
	W1205 11:09:10.686920   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:10.686982   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:10.697793   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:10.697810   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:10.697816   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:10.712011   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:10.712024   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:10.723381   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:10.723392   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:10.748709   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:10.748715   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:10.783662   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:10.783674   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:10.802703   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:10.802714   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:10.814859   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:10.814870   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:10.827642   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:10.827658   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:10.840008   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:10.840017   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:10.857396   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:10.857406   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:10.869719   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:10.869735   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:10.904909   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:10.904919   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:10.909621   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:10.909632   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:13.426071   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:11.330443   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:11.330637   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:11.343807   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:11.343902   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:11.354910   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:11.354993   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:11.365780   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:11.365857   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:11.376495   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:11.376581   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:11.386978   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:11.387064   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:11.397385   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:11.397463   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:11.408004   11277 logs.go:282] 0 containers: []
	W1205 11:09:11.408017   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:11.408085   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:11.418649   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:11.418666   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:11.418673   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:11.460242   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:11.460251   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:11.474345   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:11.474358   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:11.491562   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:11.491571   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:11.495862   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:11.495867   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:11.526743   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:11.526766   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:11.541377   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:11.541388   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:11.554422   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:11.554433   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:11.569384   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:11.569393   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:11.592391   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:11.592402   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:11.617369   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:11.617378   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:11.656086   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:11.656098   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:11.667628   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:11.667639   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:11.679461   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:11.679474   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:11.692049   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:11.692059   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:11.706589   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:11.706600   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:14.220544   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:18.428418   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:18.428652   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:18.451945   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:18.452053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:18.466876   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:18.466954   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:18.479642   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:18.479709   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:18.490046   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:18.490128   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:18.500692   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:18.500774   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:18.514292   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:18.514374   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:18.524604   11137 logs.go:282] 0 containers: []
	W1205 11:09:18.524617   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:18.524686   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:18.535425   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:18.535446   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:18.535451   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:18.549866   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:18.549879   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:18.562052   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:18.562062   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:18.582612   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:18.582623   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:18.616809   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:18.616821   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:18.650291   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:18.650304   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:18.664513   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:18.664523   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:18.680905   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:18.680918   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:18.692596   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:18.692608   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:18.717446   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:18.717453   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:18.721624   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:18.721633   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:18.735217   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:18.735228   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:18.746828   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:18.746838   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:19.223129   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:19.223262   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:19.234947   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:19.235039   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:19.246532   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:19.246612   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:19.257505   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:19.257579   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:19.268986   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:19.269067   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:19.284397   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:19.284474   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:19.295114   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:19.295191   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:19.305655   11277 logs.go:282] 0 containers: []
	W1205 11:09:19.305667   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:19.305733   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:19.322043   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:19.322066   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:19.322075   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:19.336211   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:19.336223   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:19.347684   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:19.347694   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:19.371042   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:19.371055   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:19.382639   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:19.382649   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:19.419656   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:19.419672   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:19.454364   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:19.454376   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:19.468453   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:19.468463   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:19.493647   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:19.493660   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:19.508590   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:19.508600   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:19.529591   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:19.529603   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:19.541707   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:19.541718   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:19.546626   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:19.546633   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:19.560829   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:19.560842   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:19.572056   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:19.572069   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:19.583574   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:19.583584   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:21.312877   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:22.151210   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:26.315165   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:26.315402   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:26.335428   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:26.335528   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:26.356186   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:26.356273   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:26.367749   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:26.367826   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:26.378241   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:26.378317   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:26.388478   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:26.388565   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:26.400967   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:26.401044   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:26.411526   11137 logs.go:282] 0 containers: []
	W1205 11:09:26.411537   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:26.411598   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:26.422031   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:26.422046   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:26.422052   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:26.436289   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:26.436302   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:26.448104   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:26.448115   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:26.466351   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:26.466364   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:26.484162   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:26.484176   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:26.496934   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:26.496945   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:26.501367   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:26.501376   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:26.535571   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:26.535585   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:26.549624   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:26.549638   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:26.561650   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:26.561664   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:26.585700   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:26.585711   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:26.618203   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:26.618211   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:26.629948   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:26.629961   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
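The cycle above repeats throughout this log: api_server.go probes the apiserver's /healthz endpoint, the probe times out ("stopped: ... context deadline exceeded"), and logs.go falls back to collecting diagnostics. Minikube's actual check is Go code with a client-side timeout; as a rough approximation of just the probe step (the five-second budget is an assumption read off the gap between the "Checking" and "stopped" timestamps, and -k stands in for the cluster-CA verification the real client performs):

    # Sketch of the healthz probe, not minikube's code: bounded GET against
    # the VM-internal apiserver address seen in the log.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz \
      || echo "stopped: https://10.0.2.15:8443/healthz: probe timed out or failed"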
	I1205 11:09:29.146259   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:27.153624   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:27.153749   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:27.166108   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:27.166177   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:27.178334   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:27.178404   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:27.189380   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:27.189460   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:27.200238   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:27.200319   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:27.211113   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:27.211189   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:27.222191   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:27.222277   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:27.232543   11277 logs.go:282] 0 containers: []
	W1205 11:09:27.232556   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:27.232623   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:27.247815   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:27.247835   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:27.247841   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:27.283453   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:27.283465   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:27.297488   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:27.297498   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:27.322976   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:27.322988   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:27.336918   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:27.336927   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:27.349360   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:27.349375   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:27.385891   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:27.385900   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:27.402647   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:27.402657   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:27.416374   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:27.416386   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:27.440981   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:27.440989   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:27.455612   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:27.455624   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:27.467459   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:27.467470   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:27.479274   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:27.479285   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:27.483845   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:27.483853   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:27.498797   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:27.498808   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:27.510708   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:27.510719   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:30.025043   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:34.148673   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:34.148960   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:34.172800   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:34.172941   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:34.189959   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:34.190050   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:34.203127   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:34.203213   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:34.214566   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:34.214643   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:34.225024   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:34.225102   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:34.235887   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:34.235960   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:34.246029   11137 logs.go:282] 0 containers: []
	W1205 11:09:34.246042   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:34.246111   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:34.256210   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:34.256226   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:34.256232   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:34.270817   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:34.270831   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:34.282706   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:34.282719   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:34.294572   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:34.294582   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:34.310147   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:34.310157   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:34.314722   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:34.314732   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:34.357147   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:34.357161   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:34.372447   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:34.372460   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:34.384259   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:34.384270   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:34.399484   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:34.399495   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:34.417315   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:34.417328   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:34.428432   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:34.428443   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:34.463932   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:34.463941   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:35.027372   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:35.027641   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:35.056614   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:35.056741   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:35.074148   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:35.074239   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:35.086113   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:35.086200   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:35.097093   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:35.097173   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:35.108025   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:35.108113   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:35.122935   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:35.123010   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:35.133639   11277 logs.go:282] 0 containers: []
	W1205 11:09:35.133650   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:35.133721   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:35.143923   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:35.143942   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:35.143948   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:35.180201   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:35.180214   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:35.194774   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:35.194785   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:35.215816   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:35.215826   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:35.228021   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:35.228032   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:35.245268   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:35.245280   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:35.269764   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:35.269774   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:35.291083   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:35.291097   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:35.306463   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:35.306476   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:35.318334   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:35.318345   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:35.335161   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:35.335170   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:35.361067   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:35.361082   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:35.365289   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:35.365294   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:35.400761   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:35.400772   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:35.412670   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:35.412691   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:35.426272   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:35.426283   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
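Each failed probe triggers the same collection sweep: for every control-plane component, list containers named k8s_<component>, log how many were found, then tail each one's logs. A minimal sketch of that sweep, assuming the component list and the `--tail 400` depth shown in the log (the loop itself is illustrative, not minikube's implementation):

    # Enumerate k8s_* containers per component and tail their logs, mirroring
    # the "docker ps -a --filter=name=k8s_... " / "docker logs --tail 400 <id>"
    # pairs above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "$(echo "$ids" | grep -c .) containers for ${c}: ${ids}"
      for id in $ids; do
        docker logs --tail 400 "$id"
      done
    done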
	I1205 11:09:36.990263   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:37.939973   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:41.993019   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:41.993601   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:42.036515   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:42.036683   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:42.058171   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:42.058280   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:42.072758   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:42.072853   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:42.084855   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:42.084947   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:42.095790   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:42.095876   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:42.107366   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:42.107445   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:42.118295   11137 logs.go:282] 0 containers: []
	W1205 11:09:42.118307   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:42.118378   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:42.129345   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:42.129361   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:42.129367   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:42.153194   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:42.153202   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:42.165028   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:42.165041   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:42.177015   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:42.177024   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:42.188509   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:42.188520   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:42.200185   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:42.200197   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:42.214311   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:42.214323   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:42.232566   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:42.232578   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:42.244638   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:42.244649   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:42.259407   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:42.259416   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:42.276834   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:42.276844   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:42.310295   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:42.310305   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:42.314721   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:42.314729   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:42.942273   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:42.942428   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:42.955192   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:42.955286   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:42.966190   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:42.966262   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:42.976475   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:42.976553   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:42.987079   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:42.987149   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:42.997940   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:42.998018   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:43.012794   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:43.012883   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:43.024000   11277 logs.go:282] 0 containers: []
	W1205 11:09:43.024012   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:43.024075   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:43.036048   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:43.036069   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:43.036075   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:43.074512   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:43.074523   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:43.088890   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:43.088900   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:43.100051   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:43.100062   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:43.112129   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:43.112140   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:43.125327   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:43.125337   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:43.136832   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:43.136842   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:43.174260   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:43.174272   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:43.179290   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:43.179298   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:43.197039   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:43.197051   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:43.211980   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:43.211992   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:43.236987   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:43.236996   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:43.261295   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:43.261305   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:43.276902   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:43.276913   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:43.288421   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:43.288433   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:43.312632   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:43.312644   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:45.832963   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:44.856680   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:50.834129   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:50.834334   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:50.850597   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:50.850703   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:49.858966   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:49.859163   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:49.872860   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:49.872958   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:49.884376   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:49.884458   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:49.895411   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:49.895487   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:49.905600   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:49.905677   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:49.915760   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:49.915836   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:49.926420   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:49.926488   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:49.936585   11137 logs.go:282] 0 containers: []
	W1205 11:09:49.936597   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:49.936661   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:49.947200   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:49.947216   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:49.947222   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:49.982455   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:49.982463   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:49.986837   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:49.986846   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:50.021405   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:50.021416   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:50.033154   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:50.033165   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:50.058388   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:50.058403   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:50.094813   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:50.094829   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:50.108327   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:50.108339   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:50.121739   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:50.121751   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:50.136363   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:50.136374   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:50.150898   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:50.150909   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:50.162602   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:50.162617   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:50.177386   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:50.177396   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:52.691419   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:50.863338   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:50.863415   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:50.874116   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:50.874192   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:50.888527   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:50.888612   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:50.899791   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:50.899869   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:50.910440   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:50.910510   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:50.920225   11277 logs.go:282] 0 containers: []
	W1205 11:09:50.920237   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:50.920298   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:50.931078   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:50.931095   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:50.931100   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:50.942657   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:50.942669   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:50.955857   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:50.955871   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:50.970469   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:50.970481   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:50.982138   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:50.982148   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:51.006967   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:51.006980   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:51.019295   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:51.019305   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:51.037124   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:51.037137   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:51.061699   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:51.061709   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:51.066113   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:51.066122   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:51.106428   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:51.106442   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:51.120404   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:51.120418   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:51.137145   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:51.137155   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:51.152498   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:51.152513   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:51.164443   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:51.164459   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:51.202551   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:51.202571   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:53.719288   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:57.693708   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:57.694002   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:57.719166   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:09:57.719280   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:57.735586   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:09:57.735695   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:57.749076   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:09:57.749163   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:57.760022   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:09:57.760101   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:57.770432   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:09:57.770520   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:57.780645   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:09:57.780717   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:57.794922   11137 logs.go:282] 0 containers: []
	W1205 11:09:57.794935   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:57.795003   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:57.804971   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:09:57.804987   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:57.804993   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:57.841864   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:09:57.841876   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:09:57.854669   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:09:57.854683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:09:57.869342   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:09:57.869356   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:09:57.887116   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:57.887126   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:57.921870   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:57.921879   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:57.926437   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:09:57.926446   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:09:57.940551   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:09:57.940562   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:09:57.954679   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:09:57.954692   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:09:57.966321   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:09:57.966334   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:09:57.982020   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:09:57.982030   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:09:57.993373   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:57.993384   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:58.018513   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:09:58.018524   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
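The "container status" one-liner above relies on backtick substitution for its runtime fallback: if crictl is on PATH, `which crictl` expands to its path and `sudo <path> ps -a` runs; if not, the substitution yields the literal word crictl, that command fails, and the `||` falls back to `sudo docker ps -a`. A close unrolled equivalent (close rather than exact, since the original also falls through to docker when crictl exists but its ps call fails):

    # Prefer crictl when installed, otherwise query docker directly.
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi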
	I1205 11:09:58.719761   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:58.720020   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:58.739565   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:58.739678   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:58.753717   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:58.753805   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:58.765964   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:58.766047   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:58.781214   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:58.781294   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:58.792019   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:58.792102   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:58.803445   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:58.803524   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:58.814142   11277 logs.go:282] 0 containers: []
	W1205 11:09:58.814154   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:58.814223   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:58.824461   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:58.824482   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:58.824487   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:58.864637   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:58.864652   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:58.879027   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:58.879037   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:58.891580   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:58.891590   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:58.915630   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:58.915638   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:58.920112   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:58.920117   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:58.934264   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:58.934275   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:58.945484   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:58.945495   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:58.956672   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:58.956682   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:58.968705   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:58.968714   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:58.983659   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:58.983670   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:58.998800   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:58.998811   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:59.011068   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:59.011078   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:59.028192   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:59.028202   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:59.064919   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:59.064934   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:59.089618   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:59.089628   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:00.531883   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:01.603534   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:05.534112   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:05.534322   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:05.547627   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:05.547716   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:05.558921   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:05.559003   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:05.569971   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:10:05.570058   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:05.580099   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:05.580175   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:05.590853   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:05.590936   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:05.601598   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:05.601669   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:05.614648   11137 logs.go:282] 0 containers: []
	W1205 11:10:05.614661   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:05.614728   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:05.629358   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:05.629373   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:05.629380   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:05.641051   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:05.641065   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:05.664640   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:05.664649   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:05.676961   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:05.676975   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:05.681544   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:05.681550   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:05.701904   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:05.701918   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:05.716762   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:05.716777   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:05.729041   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:05.729052   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:05.744394   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:05.744404   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:05.756206   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:05.756216   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:05.774116   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:05.774125   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:05.788153   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:05.788166   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:05.821551   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:05.821559   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:08.358610   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:06.605847   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:06.606140   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:06.631762   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:06.631909   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:06.648593   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:06.648679   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:06.662080   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:06.662163   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:06.673730   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:06.673803   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:06.684061   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:06.684140   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:06.694542   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:06.694611   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:06.704517   11277 logs.go:282] 0 containers: []
	W1205 11:10:06.704530   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:06.704598   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:06.721159   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:06.721177   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:06.721184   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:06.757646   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:06.757657   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:06.784268   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:06.784280   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:06.800304   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:06.800319   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:06.812105   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:06.812117   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:06.829645   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:06.829655   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:06.842598   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:06.842608   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:06.847672   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:06.847678   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:06.862242   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:06.862251   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:06.876664   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:06.876678   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:06.900471   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:06.900479   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:06.912269   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:06.912280   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:06.946287   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:06.946302   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:06.960395   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:06.960405   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:06.972303   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:06.972316   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:06.983710   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:06.983721   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:09.498597   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:13.361082   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:13.361422   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:13.388671   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:13.388813   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:13.409888   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:13.409993   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:13.422359   11137 logs.go:282] 2 containers: [bd886f7b8aaf 03eca12adf82]
	I1205 11:10:13.422445   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:13.437222   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:13.437295   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:13.448465   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:13.448541   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:13.464166   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:13.464247   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:13.475471   11137 logs.go:282] 0 containers: []
	W1205 11:10:13.475481   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:13.475539   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:13.485894   11137 logs.go:282] 1 containers: [f07ec81fd07a]
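Before each gathering pass, the `logs.go:282` lines enumerate containers per control-plane component by name prefix (kubeadm names pod containers `k8s_<component>_...`), and `logs.go:284` warns when a component such as kindnet has no match. A small illustrative Go helper for that discovery step follows; it runs docker locally for simplicity, whereas minikube issues the same command over SSH via ssh_runner, and the component list is just the one visible in this cycle.

    // Sketch of per-component container discovery (cf. logs.go:282/284 above).
    // Illustrative only: minikube runs this through ssh_runner, not locally.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(err)
    			continue
    		}
    		if len(ids) == 0 {
    			// mirrors the logs.go:284 warning for "kindnet" above
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    }

Note that `docker ps -a` includes exited containers, which is why components like kube-apiserver report two IDs here: the crashed instance and its restarted replacement.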
	I1205 11:10:13.485910   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:13.485915   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:13.497806   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:13.497816   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:13.520978   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:13.520986   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:13.532070   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:13.532086   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:13.536929   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:13.536935   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:13.570908   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:13.570928   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:13.585240   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:13.585255   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:13.598942   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:13.598952   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:13.612487   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:13.612500   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:13.626951   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:13.626963   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:13.638107   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:13.638117   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:13.655388   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:13.655398   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:13.690482   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:13.690492   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
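Each "Gathering logs for X ..." / `ssh_runner.go:195` pair above maps one log source to one `/bin/bash -c` command: container sources are tailed with `docker logs --tail 400 <id>`, host sources come from journalctl (kubelet, docker plus cri-docker) and dmesg, "describe nodes" shells out to the kubectl binary minikube staged at /var/lib/minikube/binaries/v1.24.1, and "container status" prefers crictl but falls back to `docker ps -a` when crictl is absent. A simplified local sketch of that fan-out, with the commands copied verbatim from the log and the runner reduced from SSH to a local exec, is shown below.

    // Sketch of the log-gathering fan-out: one bash command per source.
    // Commands are copied from the transcript; the container ID is the
    // kube-apiserver container from this cycle and is only an example.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(name, cmd string) {
    	fmt.Printf("Gathering logs for %s ...\n", name)
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("%s failed: %v\n", name, err)
    	}
    	fmt.Print(string(out))
    }

    func main() {
    	sources := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    		"kube-apiserver":   "docker logs --tail 400 6f8a29fd4fab",
    	}
    	for name, cmd := range sources {
    		gather(name, cmd)
    	}
    }

The `which crictl || echo crictl` trick makes the fallback self-contained: if crictl is missing, the first pipeline fails on the literal word "crictl" and the `|| sudo docker ps -a` branch supplies the container status instead.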
	I1205 11:10:14.500810   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:14.500987   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:14.512730   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:14.512811   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:14.523505   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:14.523585   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:14.534111   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:14.534185   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:14.544126   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:14.544202   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:14.554503   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:14.554588   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:14.565092   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:14.565175   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:14.575183   11277 logs.go:282] 0 containers: []
	W1205 11:10:14.575194   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:14.575262   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:14.585231   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:14.585247   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:14.585253   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:14.596906   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:14.596917   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:14.610980   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:14.610991   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:14.652474   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:14.652486   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:14.684979   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:14.684991   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:14.702399   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:14.702411   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:14.716848   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:14.716859   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:14.728824   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:14.728835   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:14.743673   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:14.743685   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:14.755767   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:14.755779   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:14.760417   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:14.760424   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:14.774244   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:14.774255   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:14.785972   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:14.785985   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:14.798750   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:14.798761   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:14.823397   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:14.823405   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:14.862332   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:14.862340   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:16.204121   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:17.378948   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:21.206801   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:21.207234   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:21.234853   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:21.235002   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:21.252722   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:21.252818   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:21.267120   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:21.267215   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:21.279813   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:21.279900   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:21.290216   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:21.290305   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:21.307001   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:21.307079   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:21.316870   11137 logs.go:282] 0 containers: []
	W1205 11:10:21.316883   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:21.316951   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:21.328204   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:21.328223   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:21.328229   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:21.333397   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:21.333404   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:21.347527   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:21.347538   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:21.359230   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:21.359241   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:21.371265   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:21.371276   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:21.396511   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:21.396522   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:21.431067   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:21.431078   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:21.446622   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:21.446634   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:21.463161   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:21.463175   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:21.488694   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:21.488705   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:21.501633   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:21.501644   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:21.513610   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:21.513622   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:21.548863   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:21.548875   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:21.563899   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:21.563910   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:21.575965   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:21.575978   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:24.090561   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:22.381335   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
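The apparent timestamp inversion here (11:10:24 for PID 11137 immediately before 11:10:22 for PID 11277) is not corruption: two minikube processes write to the same capture, so timestamps are only monotonic within one PID, which klog records as the third whitespace-separated field of each line. A small hypothetical filter, not part of minikube or this test harness, can split the transcript back into per-process streams for easier reading:

    // Filter a combined klog capture down to one PID's lines.
    // klog line shape: I1205 11:10:22.381335   11277 file.go:123] msg
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	want := "11277" // default PID; override via the first argument
    	if len(os.Args) > 1 {
    		want = os.Args[1]
    	}
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long log lines
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 3 && fields[2] == want {
    			fmt.Println(sc.Text())
    		}
    	}
    }

Usage against a saved copy of this report would look like `go run splitpid.go 11137 < report.log`, after which each retry loop reads in strict time order.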
	I1205 11:10:22.381553   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:22.398790   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:22.398895   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:22.412679   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:22.412764   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:22.424431   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:22.424516   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:22.435465   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:22.435552   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:22.445942   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:22.446014   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:22.456568   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:22.456644   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:22.466047   11277 logs.go:282] 0 containers: []
	W1205 11:10:22.466059   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:22.466128   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:22.476283   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:22.476300   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:22.476305   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:22.514893   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:22.514904   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:22.519276   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:22.519282   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:22.531380   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:22.531394   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:22.545062   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:22.545072   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:22.557254   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:22.557263   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:22.579875   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:22.579884   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:22.615257   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:22.615271   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:22.630018   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:22.630028   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:22.650959   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:22.650976   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:22.676388   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:22.676401   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:22.692192   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:22.692207   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:22.707157   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:22.707172   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:22.721219   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:22.721235   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:22.732816   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:22.732827   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:22.744218   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:22.744229   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:25.258001   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:29.091434   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:29.091910   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:29.123943   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:29.124102   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:29.143516   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:29.143627   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:29.158463   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:29.158557   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:29.170880   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:29.170968   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:29.182431   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:29.182509   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:29.193133   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:29.193201   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:29.203481   11137 logs.go:282] 0 containers: []
	W1205 11:10:29.203492   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:29.203559   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:29.219565   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:29.219581   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:29.219587   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:29.252804   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:29.252812   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:29.267742   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:29.267755   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:29.282807   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:29.282817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:29.298532   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:29.298543   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:29.310926   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:29.310936   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:29.332622   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:29.332635   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:29.357697   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:29.357711   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:29.421437   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:29.421449   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:29.433643   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:29.433655   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:29.445791   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:29.445802   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:29.467501   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:29.467511   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:29.479034   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:29.479048   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:29.491760   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:29.491773   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:29.496464   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:29.496475   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:30.259696   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:30.259913   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:30.284096   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:30.284205   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:30.299017   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:30.299109   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:30.312564   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:30.312635   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:30.323885   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:30.323960   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:30.349573   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:30.349652   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:30.369457   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:30.369541   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:30.384064   11277 logs.go:282] 0 containers: []
	W1205 11:10:30.384080   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:30.384156   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:30.394315   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:30.394333   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:30.394339   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:30.433052   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:30.433064   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:30.447674   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:30.447682   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:30.458932   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:30.458943   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:30.478257   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:30.478269   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:30.492416   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:30.492431   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:30.521390   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:30.521404   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:30.535692   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:30.535704   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:30.547738   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:30.547747   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:30.562303   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:30.562313   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:30.566822   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:30.566829   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:30.589086   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:30.589099   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:30.602530   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:30.602540   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:30.614641   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:30.614653   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:30.652174   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:30.652180   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:30.664830   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:30.664839   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:32.010802   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:33.190254   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:37.013249   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:37.013781   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:37.053569   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:37.053731   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:37.075995   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:37.076121   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:37.091884   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:37.091974   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:37.105180   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:37.105264   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:37.117037   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:37.117119   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:37.127989   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:37.128064   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:37.138748   11137 logs.go:282] 0 containers: []
	W1205 11:10:37.138759   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:37.138830   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:37.149354   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:37.149373   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:37.149379   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:37.167253   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:37.167264   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:37.182672   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:37.182683   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:37.194334   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:37.194344   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:37.208409   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:37.208420   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:37.232780   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:37.232788   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:37.267000   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:37.267011   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:37.281412   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:37.281425   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:37.292995   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:37.293007   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:37.304575   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:37.304587   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:37.316168   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:37.316179   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:37.350357   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:37.350365   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:37.354544   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:37.354553   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:37.369097   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:37.369109   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:37.385109   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:37.385119   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:38.192751   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:38.193269   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:38.232611   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:38.232758   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:38.252813   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:38.252927   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:38.267550   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:38.267639   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:38.280006   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:38.280092   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:38.290530   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:38.290607   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:38.301660   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:38.301743   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:38.315093   11277 logs.go:282] 0 containers: []
	W1205 11:10:38.315103   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:38.315165   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:38.330422   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:38.330439   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:38.330446   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:38.349034   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:38.349043   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:38.364179   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:38.364191   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:38.376298   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:38.376310   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:38.401668   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:38.401684   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:38.447052   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:38.447068   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:38.473232   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:38.473246   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:38.484807   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:38.484820   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:38.496732   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:38.496742   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:38.520328   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:38.520338   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:38.534374   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:38.534387   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:38.548570   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:38.548580   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:38.566468   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:38.566478   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:38.571104   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:38.571114   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:38.607358   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:38.607372   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:38.619742   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:38.619753   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:39.903101   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:41.133973   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:44.905501   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:44.905725   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:44.919227   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:44.919321   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:44.931087   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:44.931166   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:44.942042   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:44.942124   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:44.954667   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:44.954743   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:44.969447   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:44.969527   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:44.979875   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:44.979946   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:44.990403   11137 logs.go:282] 0 containers: []
	W1205 11:10:44.990415   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:44.990484   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:45.002642   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:45.002659   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:45.002665   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:45.037126   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:45.037135   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:45.051533   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:45.051546   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:45.064464   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:45.064477   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:45.078910   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:45.078922   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:45.092405   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:45.092417   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:45.107241   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:45.107251   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:45.126158   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:45.126171   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:45.130584   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:45.130591   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:45.165097   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:45.165109   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:45.190434   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:45.190444   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:45.201674   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:45.201685   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:45.213763   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:45.213776   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:45.230408   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:45.230418   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:45.243372   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:45.243435   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:47.758397   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:46.135698   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:46.135868   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:46.149371   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:46.149463   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:46.160381   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:46.160467   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:46.171214   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:46.171291   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:46.181460   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:46.181541   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:46.191861   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:46.191940   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:46.202696   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:46.202767   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:46.212879   11277 logs.go:282] 0 containers: []
	W1205 11:10:46.212897   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:46.212964   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:46.223728   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:46.223744   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:46.223749   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:46.237969   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:46.237984   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:46.249791   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:46.249802   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:46.288114   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:46.288127   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:46.322214   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:46.322228   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:46.337037   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:46.337050   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:46.348504   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:46.348518   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:46.352544   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:46.352553   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:46.364387   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:46.364400   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:46.379298   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:46.379310   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:46.390795   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:46.390806   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:46.414371   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:46.414388   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:46.427098   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:46.427108   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:46.454473   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:46.454487   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:46.470765   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:46.470776   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:46.489622   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:46.489632   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:49.005482   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:52.760707   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:52.760844   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:52.778856   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:10:52.778939   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:52.788798   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:10:52.788881   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:52.800862   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:10:52.800944   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:52.815831   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:10:52.815912   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:52.826836   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:10:52.826921   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:52.837376   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:10:52.837451   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:52.848073   11137 logs.go:282] 0 containers: []
	W1205 11:10:52.848084   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:52.848152   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:52.859214   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:10:52.859231   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:10:52.859237   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:10:52.870767   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:10:52.870781   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:10:52.882708   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:10:52.882719   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:10:52.900817   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:10:52.900827   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:10:52.912858   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:10:52.912869   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:10:52.924819   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:52.924829   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:52.960434   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:52.960443   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:52.965433   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:52.965438   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:53.001879   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:10:53.001893   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:10:53.016789   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:10:53.016801   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:10:53.028747   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:10:53.028759   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:10:53.040211   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:10:53.040223   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:10:53.054431   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:10:53.054441   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:10:53.069425   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:53.069435   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:53.094523   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:10:53.094531   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:54.008150   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:54.008310   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:54.021067   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:54.021156   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:54.032329   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:54.032407   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:54.043496   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:54.043562   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:54.055615   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:54.055694   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:54.066326   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:54.066405   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:54.077354   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:54.077431   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:54.087747   11277 logs.go:282] 0 containers: []
	W1205 11:10:54.087762   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:54.087827   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:54.102008   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:54.102025   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:54.102031   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:54.124956   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:54.124964   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:54.138699   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:54.138709   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:54.150224   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:54.150237   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:54.166401   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:54.166411   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:54.178102   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:54.178114   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:54.196132   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:54.196146   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:54.208149   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:54.208159   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:54.212709   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:54.212716   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:54.248507   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:54.248518   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:54.273641   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:54.273652   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:54.288057   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:54.288066   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:54.300747   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:54.300759   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:54.315416   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:54.315427   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:54.339328   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:54.339337   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:54.355614   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:54.355624   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:55.608052   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:56.895898   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:00.610359   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
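[editor's note] Each "Checking apiserver healthz" / "stopped:" pair above is a timed GET against the apiserver's /healthz endpoint that gives up when the client deadline elapses. A rough shell equivalent (the real check uses a Go HTTP client with a context deadline; -k is needed because the apiserver presents a self-signed certificate, and you would add your own overall timeout around the loop):

    # Poll /healthz with a 5s per-request timeout until it answers "ok".
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
      echo "apiserver not healthy yet, retrying..."
      sleep 2
    done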
	I1205 11:11:00.610498   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:00.623131   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:00.623220   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:00.634731   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:00.634810   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:00.645322   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:00.645409   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:00.656141   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:00.656220   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:00.666909   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:00.666987   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:00.676979   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:00.677053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:00.687753   11137 logs.go:282] 0 containers: []
	W1205 11:11:00.687764   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:00.687832   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:00.697939   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:00.697958   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:00.697965   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:00.733698   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:00.733710   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:00.751298   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:00.751309   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:00.763314   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:00.763325   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:00.780071   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:00.780081   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:00.791798   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:00.791809   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:00.806564   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:00.806578   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:00.818424   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:00.818434   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:00.829729   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:00.829740   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:00.834410   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:00.834417   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:00.871570   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:00.871581   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:00.887645   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:00.887657   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:00.899188   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:00.899203   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:00.911853   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:00.911867   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:00.937025   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:00.937034   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:03.457895   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:01.898358   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:01.898600   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:01.920823   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:11:01.920960   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:01.937137   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:11:01.937226   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:01.949894   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:11:01.949977   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:01.961058   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:11:01.961139   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:01.972602   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:11:01.972681   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:01.983474   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:11:01.983555   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:01.993880   11277 logs.go:282] 0 containers: []
	W1205 11:11:01.993893   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:01.993965   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:02.004264   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:11:02.004284   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:11:02.004291   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:11:02.018101   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:11:02.018112   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:11:02.032118   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:11:02.032129   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:11:02.043735   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:02.043745   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:02.082815   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:02.082828   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:02.087649   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:02.087657   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:02.122734   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:11:02.122745   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:11:02.136680   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:11:02.136690   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:11:02.151927   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:11:02.151938   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:11:02.168166   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:02.168175   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:02.191937   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:11:02.191949   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:02.203842   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:11:02.203852   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:11:02.229855   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:11:02.229865   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:11:02.241304   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:11:02.241315   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:11:02.253346   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:11:02.253361   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:11:02.273277   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:11:02.273286   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:11:04.788169   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:08.460313   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:08.460593   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:08.483279   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:08.483413   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:08.501323   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:08.501418   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:08.514375   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:08.514462   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:08.525467   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:08.525546   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:08.535830   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:08.535912   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:08.546389   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:08.546460   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:08.556907   11137 logs.go:282] 0 containers: []
	W1205 11:11:08.556919   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:08.556993   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:08.567870   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:08.567888   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:08.567894   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:08.573441   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:08.573452   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:08.588408   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:08.588418   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:08.603024   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:08.603036   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:08.620606   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:08.620620   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:08.632532   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:08.632545   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:08.657780   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:08.657788   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:08.690157   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:08.690164   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:08.706846   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:08.706859   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:08.719108   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:08.719119   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:08.730389   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:08.730401   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:08.744626   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:08.744640   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:08.756243   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:08.756257   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:08.768043   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:08.768056   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:08.803717   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:08.803732   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:09.790436   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:09.790601   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:09.808014   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:11:09.808116   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:09.822433   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:11:09.822521   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:09.834293   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:11:09.834365   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:09.844769   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:11:09.844843   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:09.854812   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:11:09.854898   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:09.865490   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:11:09.865576   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:09.876321   11277 logs.go:282] 0 containers: []
	W1205 11:11:09.876355   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:09.876422   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:09.887256   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:11:09.887270   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:11:09.887276   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:11:09.901191   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:11:09.901201   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:11:09.916245   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:11:09.916256   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:09.929100   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:09.929110   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:09.933416   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:11:09.933422   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:11:09.947110   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:11:09.947120   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:11:09.966928   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:11:09.966939   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:11:09.984725   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:11:09.984737   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:11:10.011547   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:11:10.011562   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:11:10.025931   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:11:10.025941   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:11:10.037563   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:11:10.037573   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:11:10.051819   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:11:10.051829   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:11:10.064917   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:11:10.064927   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:11:10.076561   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:10.076571   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:10.114947   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:10.114957   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:10.149899   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:10.149910   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:11.317945   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:12.673527   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:17.675760   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:17.675849   11277 kubeadm.go:597] duration metric: took 4m3.317255291s to restartPrimaryControlPlane
	W1205 11:11:17.675892   11277 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 11:11:17.675916   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 11:11:18.746079   11277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.07014925s)
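[editor's note] Because the control plane never came healthy within the deadline, minikube falls back to wiping the node and re-initializing it. The reset it runs (shown above) can be reproduced by hand; the PATH override points kubeadm's helper lookups at minikube's bundled binaries:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force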
	I1205 11:11:18.746163   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 11:11:18.751190   11277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:11:18.754084   11277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:11:18.756615   11277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:11:18.756625   11277 kubeadm.go:157] found existing configuration files:
	
	I1205 11:11:18.756655   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf
	I1205 11:11:18.759369   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:11:18.759398   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:11:18.762399   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf
	I1205 11:11:18.765110   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:11:18.765147   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:11:18.768001   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf
	I1205 11:11:18.770995   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:11:18.771021   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:11:18.773681   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf
	I1205 11:11:18.776315   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:11:18.776343   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
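[editor's note] The grep/rm pairs above are minikube's stale-kubeconfig sweep: any config under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before `kubeadm init` runs. Condensed into one loop (endpoint and file list taken from the log; here all four files are already absent, so the grep fails and the rm is a no-op):

    endpoint="https://control-plane.minikube.internal:52022"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done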
	I1205 11:11:18.779541   11277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 11:11:18.797980   11277 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 11:11:18.798009   11277 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 11:11:18.848162   11277 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 11:11:18.848306   11277 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 11:11:18.848363   11277 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 11:11:18.896182   11277 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 11:11:18.904337   11277 out.go:235]   - Generating certificates and keys ...
	I1205 11:11:18.904371   11277 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 11:11:18.904419   11277 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 11:11:18.904466   11277 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 11:11:18.904499   11277 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 11:11:18.904537   11277 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 11:11:18.904576   11277 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 11:11:18.904605   11277 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 11:11:18.904652   11277 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 11:11:18.904784   11277 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 11:11:18.904913   11277 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 11:11:18.904970   11277 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 11:11:18.905025   11277 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 11:11:19.055209   11277 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 11:11:19.088878   11277 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 11:11:19.230081   11277 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 11:11:19.265358   11277 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 11:11:19.296923   11277 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 11:11:19.297316   11277 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 11:11:19.297420   11277 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 11:11:19.381943   11277 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 11:11:16.320593   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:16.320842   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:16.334809   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:16.334903   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:16.347688   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:16.347766   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:16.358386   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:16.358458   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:16.369492   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:16.369566   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:16.380411   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:16.380485   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:16.391393   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:16.391475   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:16.403165   11137 logs.go:282] 0 containers: []
	W1205 11:11:16.403176   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:16.403243   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:16.414013   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:16.414030   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:16.414036   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:16.425883   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:16.425895   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:16.438570   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:16.438580   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:16.450259   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:16.450269   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:16.462188   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:16.462198   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:16.497936   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:16.497949   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:16.502558   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:16.502568   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:16.514803   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:16.514817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:16.529414   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:16.529424   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:16.553246   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:16.553259   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:16.586928   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:16.586938   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:16.602391   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:16.602404   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:16.614075   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:16.614087   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:16.629645   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:16.629655   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:16.647126   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:16.647137   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:19.166749   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:19.390090   11277 out.go:235]   - Booting up control plane ...
	I1205 11:11:19.390153   11277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 11:11:19.390197   11277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 11:11:19.390231   11277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 11:11:19.390288   11277 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 11:11:19.390363   11277 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
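[editor's note] At this point kubeadm blocks until the kubelet has started every static pod defined under /etc/kubernetes/manifests (up to 4m0s). Two ways to watch progress from another shell, assuming the same guest paths as in this log:

    # Containers created from the static manifests appear in the runtime first:
    sudo crictl ps -a
    # Once the apiserver answers, the same pods are visible through the API:
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system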
	I1205 11:11:24.169018   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:24.169187   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:24.180701   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:24.180779   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:24.192307   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:24.192391   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:24.203252   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:24.203336   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:24.214357   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:24.214437   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:24.225760   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:24.225836   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:24.237277   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:24.237358   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:24.247597   11137 logs.go:282] 0 containers: []
	W1205 11:11:24.247609   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:24.247676   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:24.258021   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:24.258040   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:24.258046   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:24.273227   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:24.273237   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:24.284823   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:24.284834   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:24.296849   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:24.296862   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:24.320733   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:24.320740   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:24.354931   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:24.354941   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:24.359344   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:24.359351   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:24.370968   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:24.370979   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:24.382764   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:24.382775   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:24.398835   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:24.398851   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:24.417132   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:24.417147   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:24.452157   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:24.452168   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:24.466193   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:24.466202   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:24.478415   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:24.478426   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:24.497452   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:24.497463   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:24.387237   11277 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002480 seconds
	I1205 11:11:24.387318   11277 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 11:11:24.392466   11277 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 11:11:24.901199   11277 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 11:11:24.901312   11277 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-616000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 11:11:25.406334   11277 kubeadm.go:310] [bootstrap-token] Using token: r8icgo.cbvdhc0kia6v4pl5
	I1205 11:11:25.412496   11277 out.go:235]   - Configuring RBAC rules ...
	I1205 11:11:25.412566   11277 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 11:11:25.412627   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 11:11:25.419282   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 11:11:25.420312   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 11:11:25.421274   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 11:11:25.422147   11277 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 11:11:25.425289   11277 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 11:11:25.596453   11277 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 11:11:25.810445   11277 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 11:11:25.811224   11277 kubeadm.go:310] 
	I1205 11:11:25.811260   11277 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 11:11:25.811281   11277 kubeadm.go:310] 
	I1205 11:11:25.811359   11277 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 11:11:25.811364   11277 kubeadm.go:310] 
	I1205 11:11:25.811377   11277 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 11:11:25.811479   11277 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 11:11:25.811513   11277 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 11:11:25.811516   11277 kubeadm.go:310] 
	I1205 11:11:25.811639   11277 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 11:11:25.811647   11277 kubeadm.go:310] 
	I1205 11:11:25.811687   11277 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 11:11:25.811694   11277 kubeadm.go:310] 
	I1205 11:11:25.811725   11277 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 11:11:25.811788   11277 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 11:11:25.811837   11277 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 11:11:25.811839   11277 kubeadm.go:310] 
	I1205 11:11:25.811897   11277 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 11:11:25.811938   11277 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 11:11:25.811948   11277 kubeadm.go:310] 
	I1205 11:11:25.811988   11277 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r8icgo.cbvdhc0kia6v4pl5 \
	I1205 11:11:25.812046   11277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a \
	I1205 11:11:25.812060   11277 kubeadm.go:310] 	--control-plane 
	I1205 11:11:25.812063   11277 kubeadm.go:310] 
	I1205 11:11:25.812103   11277 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 11:11:25.812110   11277 kubeadm.go:310] 
	I1205 11:11:25.812147   11277 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r8icgo.cbvdhc0kia6v4pl5 \
	I1205 11:11:25.812210   11277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a 
	I1205 11:11:25.812271   11277 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
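[editor's note] kubeadm's closing warning is actionable: the kubelet unit is running (minikube started it directly) but not enabled, so it would not survive a reboot of the guest. The fix suggested by the message itself:

    sudo systemctl enable kubelet.service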
	I1205 11:11:25.812283   11277 cni.go:84] Creating CNI manager for ""
	I1205 11:11:25.812291   11277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:11:25.816949   11277 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 11:11:25.822927   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 11:11:25.826576   11277 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
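[editor's note] The 496-byte file scp'd here is minikube's bridge CNI config. The log does not reproduce its contents; the following is only a representative bridge conflist of the kind such a file contains (illustrative values, not the verbatim bytes minikube writes):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF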
	I1205 11:11:25.831616   11277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 11:11:25.831680   11277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 11:11:25.831681   11277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-616000 minikube.k8s.io/updated_at=2024_12_05T11_11_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=stopped-upgrade-616000 minikube.k8s.io/primary=true
	I1205 11:11:25.874293   11277 kubeadm.go:1113] duration metric: took 42.662542ms to wait for elevateKubeSystemPrivileges
	I1205 11:11:25.874299   11277 ops.go:34] apiserver oom_adj: -16
	I1205 11:11:25.874312   11277 kubeadm.go:394] duration metric: took 4m11.529325333s to StartCluster
	I1205 11:11:25.874323   11277 settings.go:142] acquiring lock: {Name:mk685c3b4b58f394644fceb0edca00785ff86d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:11:25.874422   11277 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:11:25.874874   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:11:25.875115   11277 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:11:25.875122   11277 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 11:11:25.875157   11277 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-616000"
	I1205 11:11:25.875169   11277 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-616000"
	W1205 11:11:25.875173   11277 addons.go:243] addon storage-provisioner should already be in state true
	I1205 11:11:25.875184   11277 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1205 11:11:25.875203   11277 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-616000"
	I1205 11:11:25.875216   11277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-616000"
	I1205 11:11:25.875222   11277 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:11:25.878923   11277 out.go:177] * Verifying Kubernetes components...
	I1205 11:11:25.879614   11277 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046c7740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:11:25.883181   11277 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-616000"
	W1205 11:11:25.883187   11277 addons.go:243] addon default-storageclass should already be in state true
	I1205 11:11:25.883195   11277 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1205 11:11:25.883753   11277 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 11:11:25.883758   11277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 11:11:25.883763   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:11:25.886794   11277 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:11:27.011132   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:25.890885   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:11:25.892121   11277 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:11:25.892125   11277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 11:11:25.892130   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:11:25.964077   11277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:11:25.969971   11277 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:11:25.970028   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:11:25.973766   11277 api_server.go:72] duration metric: took 98.640625ms to wait for apiserver process to appear ...
	I1205 11:11:25.973775   11277 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:11:25.973783   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
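[editor's note] The component wait has the two phases visible above: first a process check (pgrep against the kube-apiserver command line), then the healthz probe. Combined as a shell sketch, reusing the healthz poll shown earlier with the same self-signed-certificate caveat:

    # Phase 1: wait for the apiserver process to appear.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 1; done
    # Phase 2: wait for it to report healthy.
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do sleep 2; done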
	I1205 11:11:26.012619   11277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 11:11:26.021323   11277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:11:26.379698   11277 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 11:11:26.379711   11277 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 11:11:32.012837   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:32.013068   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:32.029862   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:32.029964   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:32.044953   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:32.045036   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:32.056844   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:32.056924   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:32.067966   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:32.068053   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:32.078465   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:32.078547   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:32.089249   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:32.089331   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:32.100102   11137 logs.go:282] 0 containers: []
	W1205 11:11:32.100114   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:32.100190   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:32.113814   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:32.113833   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:32.113838   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:32.147429   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:32.147441   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:32.167670   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:32.167682   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:32.182258   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:32.182269   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:32.193833   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:32.193847   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:32.208567   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:32.208580   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:32.230361   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:32.230375   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:32.242712   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:32.242723   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:32.257184   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:32.257198   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:32.261668   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:32.261677   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:32.299942   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:32.299954   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:32.312143   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:32.312156   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:32.323550   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:32.323561   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:32.335380   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:32.335393   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:32.347326   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:32.347339   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:30.975876   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:30.975905   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:34.873462   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:35.976146   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:35.976189   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:39.875671   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:39.875790   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:39.887112   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:39.887199   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:39.897488   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:39.897562   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:39.909902   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:39.909986   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:39.924546   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:39.924655   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:39.936976   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:39.937059   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:39.949513   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:39.949604   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:39.961475   11137 logs.go:282] 0 containers: []
	W1205 11:11:39.961487   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:39.961560   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:39.972624   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:39.972642   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:39.972648   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:39.986089   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:39.986101   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:39.999626   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:39.999637   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:40.011494   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:40.011504   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:40.025029   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:40.025042   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:40.062542   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:40.062563   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:40.101258   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:40.101272   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:40.116667   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:40.116683   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:40.137835   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:40.137850   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:40.158152   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:40.158164   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:40.170886   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:40.170899   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:40.196088   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:40.196108   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:40.208572   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:40.208585   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:40.224212   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:40.224226   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:40.237886   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:40.237900   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:42.744920   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:40.976554   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:40.976579   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:47.747175   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:47.747446   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:47.769642   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:47.769744   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:47.783713   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:47.783807   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:47.797443   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:47.797529   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:47.812935   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:47.813006   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:47.823347   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:47.823425   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:47.833800   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:47.833870   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:47.844462   11137 logs.go:282] 0 containers: []
	W1205 11:11:47.844478   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:47.844549   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:47.854771   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:47.854788   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:47.854794   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:47.887764   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:47.887772   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:47.903520   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:47.903532   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:47.915307   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:47.915320   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:47.919908   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:47.919917   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:47.932120   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:47.932134   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:47.957002   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:47.957010   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:47.971216   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:47.971226   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:47.983301   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:47.983312   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:47.995302   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:47.995316   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:48.014226   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:48.014238   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:48.025700   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:48.025714   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:48.060806   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:48.060817   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:48.075273   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:48.075284   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:48.087005   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:48.087017   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:45.976971   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:45.976996   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:50.607574   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:50.977504   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:50.977524   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:55.978676   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:55.978704   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1205 11:11:56.382189   11277 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1205 11:11:56.388444   11277 out.go:177] * Enabled addons: storage-provisioner
	I1205 11:11:55.609928   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:55.610198   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:55.633279   11137 logs.go:282] 1 containers: [6f8a29fd4fab]
	I1205 11:11:55.633407   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:55.649820   11137 logs.go:282] 1 containers: [ce151b55ec74]
	I1205 11:11:55.649915   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:55.663420   11137 logs.go:282] 4 containers: [a9825cec9ee3 995fc5f291bc bd886f7b8aaf 03eca12adf82]
	I1205 11:11:55.663503   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:55.674654   11137 logs.go:282] 1 containers: [3be66de574ff]
	I1205 11:11:55.674739   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:55.685296   11137 logs.go:282] 1 containers: [b2e4bca680f3]
	I1205 11:11:55.685382   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:55.696721   11137 logs.go:282] 1 containers: [ace6598b01c4]
	I1205 11:11:55.696803   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:55.707654   11137 logs.go:282] 0 containers: []
	W1205 11:11:55.707670   11137 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:55.707740   11137 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:55.719503   11137 logs.go:282] 1 containers: [f07ec81fd07a]
	I1205 11:11:55.719519   11137 logs.go:123] Gathering logs for etcd [ce151b55ec74] ...
	I1205 11:11:55.719525   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce151b55ec74"
	I1205 11:11:55.733796   11137 logs.go:123] Gathering logs for coredns [03eca12adf82] ...
	I1205 11:11:55.733808   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03eca12adf82"
	I1205 11:11:55.745691   11137 logs.go:123] Gathering logs for kube-scheduler [3be66de574ff] ...
	I1205 11:11:55.745702   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3be66de574ff"
	I1205 11:11:55.766654   11137 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:55.766666   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:55.801098   11137 logs.go:123] Gathering logs for coredns [995fc5f291bc] ...
	I1205 11:11:55.801108   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 995fc5f291bc"
	I1205 11:11:55.812827   11137 logs.go:123] Gathering logs for coredns [bd886f7b8aaf] ...
	I1205 11:11:55.812840   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd886f7b8aaf"
	I1205 11:11:55.825394   11137 logs.go:123] Gathering logs for kube-proxy [b2e4bca680f3] ...
	I1205 11:11:55.825407   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e4bca680f3"
	I1205 11:11:55.837678   11137 logs.go:123] Gathering logs for kube-controller-manager [ace6598b01c4] ...
	I1205 11:11:55.837690   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ace6598b01c4"
	I1205 11:11:55.855432   11137 logs.go:123] Gathering logs for storage-provisioner [f07ec81fd07a] ...
	I1205 11:11:55.855442   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f07ec81fd07a"
	I1205 11:11:55.867238   11137 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:55.867249   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:55.889564   11137 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:55.889573   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:55.894104   11137 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:55.894113   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:55.929418   11137 logs.go:123] Gathering logs for kube-apiserver [6f8a29fd4fab] ...
	I1205 11:11:55.929432   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f8a29fd4fab"
	I1205 11:11:55.944629   11137 logs.go:123] Gathering logs for coredns [a9825cec9ee3] ...
	I1205 11:11:55.944643   11137 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a9825cec9ee3"
	I1205 11:11:55.956939   11137 logs.go:123] Gathering logs for container status ...
	I1205 11:11:55.956952   11137 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:58.471522   11137 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:56.396443   11277 addons.go:510] duration metric: took 30.521238208s for enable addons: enabled=[storage-provisioner]
	I1205 11:12:03.472396   11137 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:03.477111   11137 out.go:201] 
	W1205 11:12:03.481137   11137 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1205 11:12:03.481151   11137 out.go:270] * 
	W1205 11:12:03.482255   11137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:12:03.493538   11137 out.go:201] 
	I1205 11:12:00.979720   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:00.979741   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:05.981059   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:05.981079   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:10.982256   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:10.982303   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
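
The interleaved entries above come from two concurrent minikube runs: pid 11277 drives stopped-upgrade-616000 and pid 11137 drives running-upgrade-829000, whose "minikube logs" dump follows below. Both repeat the same cycle: probe the apiserver healthz endpoint, hit the ~5s client timeout, gather component logs over SSH, and retry until the 6m0s node-start budget expires with GUEST_START. A minimal sketch of the same probe from inside the guest, assuming curl is available there (the URL and timeout are taken from the log; minikube itself uses a Go HTTP client):

    # Probe the endpoint minikube polls; a healthy apiserver answers "ok".
    # -k skips TLS verification of the apiserver's self-signed serving cert.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz

Here every probe ends in "context deadline exceeded", which is what ultimately fails GUEST_START.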
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-12-05 19:03:02 UTC, ends at Thu 2024-12-05 19:12:19 UTC. --
	Dec 05 19:12:04 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:04Z" level=error msg="ContainerStats resp: {0x4000814300 linux}"
	Dec 05 19:12:04 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:04Z" level=error msg="ContainerStats resp: {0x40008fb900 linux}"
	Dec 05 19:12:04 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:04Z" level=error msg="ContainerStats resp: {0x4000815500 linux}"
	Dec 05 19:12:04 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:04Z" level=error msg="ContainerStats resp: {0x400026ed00 linux}"
	Dec 05 19:12:04 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:04Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 05 19:12:05 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:05Z" level=error msg="ContainerStats resp: {0x4000391400 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x40008ace80 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x40008ad300 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x4000949740 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x40008ad880 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x40008fa800 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x40008fac40 linux}"
	Dec 05 19:12:06 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:06Z" level=error msg="ContainerStats resp: {0x40008144c0 linux}"
	Dec 05 19:12:09 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 05 19:12:14 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 05 19:12:16 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:16Z" level=error msg="ContainerStats resp: {0x4000415f00 linux}"
	Dec 05 19:12:16 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:16Z" level=error msg="ContainerStats resp: {0x400063f200 linux}"
	Dec 05 19:12:17 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:17Z" level=error msg="ContainerStats resp: {0x400026e700 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x40008fb300 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x40008fb4c0 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x40008fb900 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x40000b9e40 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x4000776300 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x40007764c0 linux}"
	Dec 05 19:12:18 running-upgrade-829000 cri-dockerd[3073]: time="2024-12-05T19:12:18Z" level=error msg="ContainerStats resp: {0x40008b8700 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ec3f993a25fbd       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   e9108e7848cd4
	0bba13bd14201       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   876c0cc984bad
	a9825cec9ee3d       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   876c0cc984bad
	995fc5f291bc9       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   e9108e7848cd4
	f07ec81fd07ae       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   fcf5eaa313fec
	b2e4bca680f33       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   1d30f5710869c
	3be66de574ffb       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   697dc3a29f34d
	6f8a29fd4fabc       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   fde8a4b512701
	ace6598b01c46       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   f6ffd31f5cc13
	ce151b55ec742       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   d95671a1ac529
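
This table comes from the "container status" gathering step shown repeatedly above, which prefers crictl and falls back to docker; the command below is verbatim from those Run: lines, minus the enclosing /bin/bash -c:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Note that both coredns containers are on attempt 2 with their attempt-1 instances Exited, matching the SIGTERM shutdowns in the coredns logs below, while the control-plane containers (apiserver, etcd, scheduler, controller-manager) are still on attempt 0 and Running.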
	
	
	==> coredns [0bba13bd1420] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8341340355194830547.4419617325555176201. HINFO: read udp 10.244.0.3:46644->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8341340355194830547.4419617325555176201. HINFO: read udp 10.244.0.3:37880->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8341340355194830547.4419617325555176201. HINFO: read udp 10.244.0.3:54064->10.0.2.3:53: i/o timeout
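
Each coredns instance, old and new, logs the same failure: its HINFO self-test queries to the upstream resolver 10.0.2.3:53 time out. 10.0.2.3 is the built-in DNS forwarder of QEMU's user-mode (slirp) network, so upstream DNS is unreachable from inside the guest even though the 10.244.0.0/24 pod network itself is working. A quick check from inside the VM, assuming the BusyBox nslookup that Buildroot images typically ship:

    # Query the slirp DNS forwarder directly; a timeout here confirms the
    # same upstream breakage coredns is reporting.
    nslookup kubernetes.io 10.0.2.3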
	
	
	==> coredns [995fc5f291bc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:33176->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:41512->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:51945->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:54632->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:38376->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:55081->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:49305->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:40016->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6285371653961227261.8082322539701033324. HINFO: read udp 10.244.0.2:51537->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9825cec9ee3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:60347->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:36380->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:33714->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:54427->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:41895->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:48163->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:47968->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:47672->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:41971->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1454899623491972313.3923777210942290972. HINFO: read udp 10.244.0.3:43812->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ec3f993a25fb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 958039176780666608.8671501378230771983. HINFO: read udp 10.244.0.2:49662->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 958039176780666608.8671501378230771983. HINFO: read udp 10.244.0.2:54609->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 958039176780666608.8671501378230771983. HINFO: read udp 10.244.0.2:34554->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-829000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-829000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331
	                    minikube.k8s.io/name=running-upgrade-829000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T11_08_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 19:07:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-829000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 19:12:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 19:08:02 +0000   Thu, 05 Dec 2024 19:07:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 19:08:02 +0000   Thu, 05 Dec 2024 19:07:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 19:08:02 +0000   Thu, 05 Dec 2024 19:07:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 19:08:02 +0000   Thu, 05 Dec 2024 19:08:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-829000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc2475f4b2f148828b44aed1c698e02f
	  System UUID:                dc2475f4b2f148828b44aed1c698e02f
	  Boot ID:                    e519c5c5-ebdc-47cc-8f7c-592c515d8c41
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kcm44                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-nhhd5                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-829000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kube-apiserver-running-upgrade-829000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-829000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-55lpg                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-829000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-829000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-829000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-829000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-829000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-829000 event: Registered Node running-upgrade-829000 in Controller
	
	
	==> dmesg <==
	[  +1.796949] systemd-fstab-generator[880]: Ignoring "noauto" for root device
	[  +0.086583] systemd-fstab-generator[891]: Ignoring "noauto" for root device
	[  +0.080290] systemd-fstab-generator[902]: Ignoring "noauto" for root device
	[  +1.139490] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.092713] systemd-fstab-generator[1052]: Ignoring "noauto" for root device
	[  +0.076687] systemd-fstab-generator[1063]: Ignoring "noauto" for root device
	[  +2.384161] systemd-fstab-generator[1290]: Ignoring "noauto" for root device
	[  +9.642798] systemd-fstab-generator[1933]: Ignoring "noauto" for root device
	[  +2.551179] systemd-fstab-generator[2206]: Ignoring "noauto" for root device
	[  +0.150166] systemd-fstab-generator[2241]: Ignoring "noauto" for root device
	[  +0.090586] systemd-fstab-generator[2254]: Ignoring "noauto" for root device
	[  +0.106923] systemd-fstab-generator[2270]: Ignoring "noauto" for root device
	[ +12.675148] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.213693] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.089844] systemd-fstab-generator[3041]: Ignoring "noauto" for root device
	[  +0.077787] systemd-fstab-generator[3052]: Ignoring "noauto" for root device
	[  +0.094067] systemd-fstab-generator[3066]: Ignoring "noauto" for root device
	[  +2.382625] systemd-fstab-generator[3216]: Ignoring "noauto" for root device
	[  +3.162410] systemd-fstab-generator[3592]: Ignoring "noauto" for root device
	[  +1.236470] systemd-fstab-generator[3888]: Ignoring "noauto" for root device
	[Dec 5 19:04] kauditd_printk_skb: 68 callbacks suppressed
	[Dec 5 19:07] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.534607] systemd-fstab-generator[11915]: Ignoring "noauto" for root device
	[Dec 5 19:08] systemd-fstab-generator[12515]: Ignoring "noauto" for root device
	[  +0.466334] systemd-fstab-generator[12645]: Ignoring "noauto" for root device
	
	
	==> etcd [ce151b55ec74] <==
	{"level":"info","ts":"2024-12-05T19:07:57.484Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-12-05T19:07:57.485Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T19:07:57.485Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T19:07:57.485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-05T19:07:57.485Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-05T19:07:57.485Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-05T19:07:57.485Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T19:07:58.482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-05T19:07:58.483Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-829000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T19:07:58.483Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T19:07:58.483Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T19:07:58.484Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-05T19:07:58.484Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T19:07:58.484Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T19:07:58.484Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T19:07:58.485Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T19:07:58.485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T19:07:58.485Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T19:07:58.485Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:12:19 up 9 min,  0 users,  load average: 0.02, 0.16, 0.11
	Linux running-upgrade-829000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [6f8a29fd4fab] <==
	I1205 19:07:59.726185       1 controller.go:611] quota admission added evaluator for: namespaces
	I1205 19:07:59.763115       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1205 19:07:59.763186       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1205 19:07:59.763400       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 19:07:59.764141       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1205 19:07:59.764996       1 cache.go:39] Caches are synced for autoregister controller
	I1205 19:07:59.791478       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1205 19:08:00.492144       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 19:08:00.671337       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1205 19:08:00.685621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1205 19:08:00.685723       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 19:08:00.821759       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 19:08:00.834415       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 19:08:00.930635       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1205 19:08:00.932664       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1205 19:08:00.933112       1 controller.go:611] quota admission added evaluator for: endpoints
	I1205 19:08:00.934425       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 19:08:01.801945       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1205 19:08:02.478521       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1205 19:08:02.482040       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1205 19:08:02.495348       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1205 19:08:02.568434       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 19:08:15.752787       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1205 19:08:15.769741       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1205 19:08:16.934682       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [ace6598b01c4] <==
	I1205 19:08:15.747011       1 shared_informer.go:262] Caches are synced for PVC protection
	I1205 19:08:15.748151       1 shared_informer.go:262] Caches are synced for daemon sets
	I1205 19:08:15.748157       1 shared_informer.go:262] Caches are synced for HPA
	I1205 19:08:15.755227       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55lpg"
	I1205 19:08:15.761509       1 shared_informer.go:262] Caches are synced for attach detach
	I1205 19:08:15.763939       1 shared_informer.go:262] Caches are synced for deployment
	I1205 19:08:15.765696       1 shared_informer.go:262] Caches are synced for taint
	I1205 19:08:15.765711       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1205 19:08:15.765741       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1205 19:08:15.765761       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-829000. Assuming now as a timestamp.
	I1205 19:08:15.765776       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1205 19:08:15.765877       1 event.go:294] "Event occurred" object="running-upgrade-829000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-829000 event: Registered Node running-upgrade-829000 in Controller"
	I1205 19:08:15.771695       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1205 19:08:15.787038       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kcm44"
	I1205 19:08:15.794653       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nhhd5"
	I1205 19:08:15.796701       1 shared_informer.go:262] Caches are synced for disruption
	I1205 19:08:15.796713       1 disruption.go:371] Sending events to api server.
	I1205 19:08:15.799682       1 shared_informer.go:262] Caches are synced for job
	I1205 19:08:15.805353       1 shared_informer.go:262] Caches are synced for cronjob
	I1205 19:08:15.811024       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1205 19:08:15.852493       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 19:08:15.869369       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 19:08:16.283461       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 19:08:16.313685       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 19:08:16.313704       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [b2e4bca680f3] <==
	I1205 19:08:16.921161       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1205 19:08:16.921187       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1205 19:08:16.921196       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1205 19:08:16.932811       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1205 19:08:16.932825       1 server_others.go:206] "Using iptables Proxier"
	I1205 19:08:16.932842       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1205 19:08:16.932984       1 server.go:661] "Version info" version="v1.24.1"
	I1205 19:08:16.932993       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 19:08:16.933271       1 config.go:317] "Starting service config controller"
	I1205 19:08:16.933284       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1205 19:08:16.933292       1 config.go:226] "Starting endpoint slice config controller"
	I1205 19:08:16.933293       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1205 19:08:16.933563       1 config.go:444] "Starting node config controller"
	I1205 19:08:16.933596       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1205 19:08:17.033321       1 shared_informer.go:262] Caches are synced for service config
	I1205 19:08:17.033351       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1205 19:08:17.033861       1 shared_informer.go:262] Caches are synced for node config
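
kube-proxy started normally in iptables mode and synced all three of its config caches, so Service rules were programmed. If service plumbing were in doubt, the rules it wrote could be inspected from inside the guest; KUBE- is kube-proxy's standard chain prefix:

    # List the NAT rules kube-proxy installed for Services.
    sudo iptables-save -t nat | grep KUBE-SERVICES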
	
	
	==> kube-scheduler [3be66de574ff] <==
	W1205 19:07:59.722722       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 19:07:59.722744       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1205 19:07:59.722769       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:07:59.722974       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:07:59.723019       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 19:07:59.723027       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1205 19:07:59.723080       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 19:07:59.723117       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1205 19:07:59.723166       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 19:07:59.723172       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1205 19:07:59.723184       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 19:07:59.723210       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1205 19:07:59.723319       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 19:07:59.723327       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 19:07:59.723384       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 19:07:59.723392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1205 19:07:59.723450       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:07:59.723472       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1205 19:08:00.702984       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 19:08:00.703143       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1205 19:08:00.723691       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 19:08:00.723712       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1205 19:08:00.771374       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 19:08:00.771470       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1205 19:08:01.213387       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-12-05 19:03:02 UTC, ends at Thu 2024-12-05 19:12:19 UTC. --
	Dec 05 19:08:02 running-upgrade-829000 kubelet[12521]: I1205 19:08:02.740632   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73535fc5c4ccdc5506224d9ac8208701-usr-share-ca-certificates\") pod \"kube-controller-manager-running-upgrade-829000\" (UID: \"73535fc5c4ccdc5506224d9ac8208701\") " pod="kube-system/kube-controller-manager-running-upgrade-829000"
	Dec 05 19:08:03 running-upgrade-829000 kubelet[12521]: I1205 19:08:03.529650   12521 apiserver.go:52] "Watching apiserver"
	Dec 05 19:08:03 running-upgrade-829000 kubelet[12521]: I1205 19:08:03.946863   12521 reconciler.go:157] "Reconciler: start to sync state"
	Dec 05 19:08:04 running-upgrade-829000 kubelet[12521]: E1205 19:08:04.113839   12521 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-829000\" already exists" pod="kube-system/etcd-running-upgrade-829000"
	Dec 05 19:08:04 running-upgrade-829000 kubelet[12521]: E1205 19:08:04.313323   12521 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-829000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-829000"
	Dec 05 19:08:04 running-upgrade-829000 kubelet[12521]: E1205 19:08:04.513176   12521 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-829000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-829000"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.719164   12521 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.719609   12521 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.757789   12521 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.778179   12521 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.792580   12521 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.797908   12521 topology_manager.go:200] "Topology Admit Handler"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919541   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/979c40d2-9b4b-4d96-8614-beae1353747c-lib-modules\") pod \"kube-proxy-55lpg\" (UID: \"979c40d2-9b4b-4d96-8614-beae1353747c\") " pod="kube-system/kube-proxy-55lpg"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919591   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ab896686-0d36-496d-a587-35b66b9a379a-tmp\") pod \"storage-provisioner\" (UID: \"ab896686-0d36-496d-a587-35b66b9a379a\") " pod="kube-system/storage-provisioner"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919606   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd957\" (UniqueName: \"kubernetes.io/projected/4d17563d-a7a6-458a-bd08-b9b23d23ce5b-kube-api-access-fd957\") pod \"coredns-6d4b75cb6d-kcm44\" (UID: \"4d17563d-a7a6-458a-bd08-b9b23d23ce5b\") " pod="kube-system/coredns-6d4b75cb6d-kcm44"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919619   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np46h\" (UniqueName: \"kubernetes.io/projected/979c40d2-9b4b-4d96-8614-beae1353747c-kube-api-access-np46h\") pod \"kube-proxy-55lpg\" (UID: \"979c40d2-9b4b-4d96-8614-beae1353747c\") " pod="kube-system/kube-proxy-55lpg"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919635   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkffs\" (UniqueName: \"kubernetes.io/projected/ab896686-0d36-496d-a587-35b66b9a379a-kube-api-access-jkffs\") pod \"storage-provisioner\" (UID: \"ab896686-0d36-496d-a587-35b66b9a379a\") " pod="kube-system/storage-provisioner"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919648   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/979c40d2-9b4b-4d96-8614-beae1353747c-kube-proxy\") pod \"kube-proxy-55lpg\" (UID: \"979c40d2-9b4b-4d96-8614-beae1353747c\") " pod="kube-system/kube-proxy-55lpg"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919659   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d17563d-a7a6-458a-bd08-b9b23d23ce5b-config-volume\") pod \"coredns-6d4b75cb6d-kcm44\" (UID: \"4d17563d-a7a6-458a-bd08-b9b23d23ce5b\") " pod="kube-system/coredns-6d4b75cb6d-kcm44"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919672   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/979c40d2-9b4b-4d96-8614-beae1353747c-xtables-lock\") pod \"kube-proxy-55lpg\" (UID: \"979c40d2-9b4b-4d96-8614-beae1353747c\") " pod="kube-system/kube-proxy-55lpg"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919684   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97be093a-016d-403e-94ec-b3e8e61aca22-config-volume\") pod \"coredns-6d4b75cb6d-nhhd5\" (UID: \"97be093a-016d-403e-94ec-b3e8e61aca22\") " pod="kube-system/coredns-6d4b75cb6d-nhhd5"
	Dec 05 19:08:15 running-upgrade-829000 kubelet[12521]: I1205 19:08:15.919696   12521 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m98s\" (UniqueName: \"kubernetes.io/projected/97be093a-016d-403e-94ec-b3e8e61aca22-kube-api-access-8m98s\") pod \"coredns-6d4b75cb6d-nhhd5\" (UID: \"97be093a-016d-403e-94ec-b3e8e61aca22\") " pod="kube-system/coredns-6d4b75cb6d-nhhd5"
	Dec 05 19:08:16 running-upgrade-829000 kubelet[12521]: I1205 19:08:16.655297   12521 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="876c0cc984bad553ec09157f533449e97687e0487890de1dc62fed7fc4cad380"
	Dec 05 19:12:04 running-upgrade-829000 kubelet[12521]: I1205 19:12:04.901297   12521 scope.go:110] "RemoveContainer" containerID="bd886f7b8aafc73b3cfd8c98e2241f703b4d6e61e12b92dc2730a51521d5e0eb"
	Dec 05 19:12:04 running-upgrade-829000 kubelet[12521]: I1205 19:12:04.916858   12521 scope.go:110] "RemoveContainer" containerID="03eca12adf82a087299f281b94119d799c552e68e26366b434c51a1134ed62c2"
	
	
	==> storage-provisioner [f07ec81fd07a] <==
	I1205 19:08:17.165062       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 19:08:17.169940       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 19:08:17.170000       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 19:08:17.174391       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 19:08:17.174539       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"14e21c02-f386-4e81-a47f-2672136a7682", APIVersion:"v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-829000_c398b7c1-f6bf-4474-bc4c-2d6813e90eed became leader
	I1205 19:08:17.174553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-829000_c398b7c1-f6bf-4474-bc4c-2d6813e90eed!
	I1205 19:08:17.275763       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-829000_c398b7c1-f6bf-4474-bc4c-2d6813e90eed!
	

-- /stdout --
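The storage-provisioner log above shows the standard client-go leader-election handshake: the pod attempts to acquire the kube-system/k8s.io-minikube-hostpath lease and only starts its provisioner controller once it becomes leader. For reference, a minimal hedged sketch of that pattern — the lock name and namespace are taken from the log above; the lock type, timings, and everything else are illustrative (the provisioner shown here uses an Endpoints-based lock, while Leases is the current client-go recommendation):

	// Hedged sketch of client-go leader election, mirroring the
	// storage-provisioner log lines above. Illustrative only.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lock name/namespace from the log; Leases lock type is an
		// assumption (the provisioner above records an Endpoints lock).
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Matches "Starting provisioner controller ..." above.
					log.Println("became leader; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; shutting down")
				},
			},
		})
	}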
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-829000 -n running-upgrade-829000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-829000 -n running-upgrade-829000: exit status 2 (15.628865542s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-829000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-829000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-829000
--- FAIL: TestRunningBinaryUpgrade (601.03s)

TestKubernetesUpgrade (17.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-763000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-763000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.842765s)

-- stdout --
	* [kubernetes-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-763000" primary control-plane node in "kubernetes-upgrade-763000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-763000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:05:34.917366   11205 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:05:34.917531   11205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:05:34.917535   11205 out.go:358] Setting ErrFile to fd 2...
	I1205 11:05:34.917538   11205 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:05:34.917655   11205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:05:34.918812   11205 out.go:352] Setting JSON to false
	I1205 11:05:34.937101   11205 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5706,"bootTime":1733419828,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:05:34.937176   11205 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:05:34.943702   11205 out.go:177] * [kubernetes-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:05:34.950695   11205 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:05:34.950748   11205 notify.go:220] Checking for updates...
	I1205 11:05:34.958659   11205 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:05:34.961574   11205 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:05:34.965810   11205 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:05:34.968646   11205 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:05:34.971635   11205 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:05:34.975088   11205 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:05:34.975162   11205 config.go:182] Loaded profile config "running-upgrade-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:05:34.975215   11205 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:05:34.979827   11205 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:05:34.986647   11205 start.go:297] selected driver: qemu2
	I1205 11:05:34.986653   11205 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:05:34.986658   11205 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:05:34.989153   11205 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:05:34.992698   11205 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:05:34.995702   11205 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 11:05:34.995719   11205 cni.go:84] Creating CNI manager for ""
	I1205 11:05:34.995743   11205 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:05:34.995776   11205 start.go:340] cluster config:
	{Name:kubernetes-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:05:35.000508   11205 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:05:35.008652   11205 out.go:177] * Starting "kubernetes-upgrade-763000" primary control-plane node in "kubernetes-upgrade-763000" cluster
	I1205 11:05:35.012647   11205 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:05:35.012663   11205 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:05:35.012672   11205 cache.go:56] Caching tarball of preloaded images
	I1205 11:05:35.012742   11205 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:05:35.012759   11205 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:05:35.012812   11205 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kubernetes-upgrade-763000/config.json ...
	I1205 11:05:35.012822   11205 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kubernetes-upgrade-763000/config.json: {Name:mka9ef8627e8564a235df585e6b924f55f35f255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:05:35.013188   11205 start.go:360] acquireMachinesLock for kubernetes-upgrade-763000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:05:35.013231   11205 start.go:364] duration metric: took 37.167µs to acquireMachinesLock for "kubernetes-upgrade-763000"
	I1205 11:05:35.013241   11205 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:05:35.013266   11205 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:05:35.016620   11205 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:05:35.042148   11205 start.go:159] libmachine.API.Create for "kubernetes-upgrade-763000" (driver="qemu2")
	I1205 11:05:35.042179   11205 client.go:168] LocalClient.Create starting
	I1205 11:05:35.042261   11205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:05:35.042299   11205 main.go:141] libmachine: Decoding PEM data...
	I1205 11:05:35.042310   11205 main.go:141] libmachine: Parsing certificate...
	I1205 11:05:35.042350   11205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:05:35.042379   11205 main.go:141] libmachine: Decoding PEM data...
	I1205 11:05:35.042390   11205 main.go:141] libmachine: Parsing certificate...
	I1205 11:05:35.042767   11205 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:05:35.214153   11205 main.go:141] libmachine: Creating SSH key...
	I1205 11:05:35.283892   11205 main.go:141] libmachine: Creating Disk image...
	I1205 11:05:35.283898   11205 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:05:35.284111   11205 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:35.300188   11205 main.go:141] libmachine: STDOUT: 
	I1205 11:05:35.300212   11205 main.go:141] libmachine: STDERR: 
	I1205 11:05:35.300287   11205 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2 +20000M
	I1205 11:05:35.309076   11205 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:05:35.309092   11205 main.go:141] libmachine: STDERR: 
	I1205 11:05:35.309109   11205 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:35.309115   11205 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:05:35.309126   11205 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:05:35.309153   11205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:a5:29:91:14:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:35.311028   11205 main.go:141] libmachine: STDOUT: 
	I1205 11:05:35.311040   11205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:05:35.311059   11205 client.go:171] duration metric: took 268.875208ms to LocalClient.Create
	I1205 11:05:37.313165   11205 start.go:128] duration metric: took 2.299908666s to createHost
	I1205 11:05:37.313229   11205 start.go:83] releasing machines lock for "kubernetes-upgrade-763000", held for 2.300015083s
	W1205 11:05:37.313266   11205 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:05:37.329760   11205 out.go:177] * Deleting "kubernetes-upgrade-763000" in qemu2 ...
	W1205 11:05:37.346945   11205 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:05:37.346965   11205 start.go:729] Will try again in 5 seconds ...
	I1205 11:05:42.349148   11205 start.go:360] acquireMachinesLock for kubernetes-upgrade-763000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:05:42.349695   11205 start.go:364] duration metric: took 433.416µs to acquireMachinesLock for "kubernetes-upgrade-763000"
	I1205 11:05:42.349784   11205 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:05:42.350047   11205 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:05:42.359604   11205 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:05:42.398281   11205 start.go:159] libmachine.API.Create for "kubernetes-upgrade-763000" (driver="qemu2")
	I1205 11:05:42.398330   11205 client.go:168] LocalClient.Create starting
	I1205 11:05:42.398469   11205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:05:42.398552   11205 main.go:141] libmachine: Decoding PEM data...
	I1205 11:05:42.398566   11205 main.go:141] libmachine: Parsing certificate...
	I1205 11:05:42.398617   11205 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:05:42.398667   11205 main.go:141] libmachine: Decoding PEM data...
	I1205 11:05:42.398680   11205 main.go:141] libmachine: Parsing certificate...
	I1205 11:05:42.399258   11205 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:05:42.569169   11205 main.go:141] libmachine: Creating SSH key...
	I1205 11:05:42.666611   11205 main.go:141] libmachine: Creating Disk image...
	I1205 11:05:42.666623   11205 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:05:42.666857   11205 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:42.677359   11205 main.go:141] libmachine: STDOUT: 
	I1205 11:05:42.677377   11205 main.go:141] libmachine: STDERR: 
	I1205 11:05:42.677446   11205 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2 +20000M
	I1205 11:05:42.686066   11205 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:05:42.686083   11205 main.go:141] libmachine: STDERR: 
	I1205 11:05:42.686102   11205 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:42.686106   11205 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:05:42.686118   11205 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:05:42.686147   11205 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7d:19:81:09:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:42.688044   11205 main.go:141] libmachine: STDOUT: 
	I1205 11:05:42.688058   11205 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:05:42.688072   11205 client.go:171] duration metric: took 289.737417ms to LocalClient.Create
	I1205 11:05:44.690201   11205 start.go:128] duration metric: took 2.340147375s to createHost
	I1205 11:05:44.690250   11205 start.go:83] releasing machines lock for "kubernetes-upgrade-763000", held for 2.340539334s
	W1205 11:05:44.690477   11205 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:05:44.697234   11205 out.go:201] 
	W1205 11:05:44.704643   11205 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:05:44.704676   11205 out.go:270] * 
	* 
	W1205 11:05:44.708206   11205 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:05:44.718238   11205 out.go:201] 

** /stderr **
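Both create attempts in the stderr above die at the same step: socket_vmnet_client gets "Connection refused" dialing /var/run/socket_vmnet, so QEMU is never launched. Before rerunning, it is worth confirming whether the socket_vmnet daemon is actually listening on that path. A small hedged check in Go — the socket path is copied from the log, everything else is illustrative:

	// Hypothetical diagnostic: dial the unix socket the qemu2 driver
	// depends on and report whether anything is listening.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing log
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the error above and usually
			// means the socket_vmnet service is not running on the host.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}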
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-763000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-763000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-763000: (2.042495875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-763000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-763000 status --format={{.Host}}: exit status 7 (63.586458ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-763000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-763000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.2007325s)

-- stdout --
	* [kubernetes-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-763000" primary control-plane node in "kubernetes-upgrade-763000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:05:46.869752   11239 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:05:46.869903   11239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:05:46.869906   11239 out.go:358] Setting ErrFile to fd 2...
	I1205 11:05:46.869909   11239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:05:46.870042   11239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:05:46.871184   11239 out.go:352] Setting JSON to false
	I1205 11:05:46.890186   11239 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5718,"bootTime":1733419828,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:05:46.890268   11239 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:05:46.895801   11239 out.go:177] * [kubernetes-upgrade-763000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:05:46.903819   11239 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:05:46.903908   11239 notify.go:220] Checking for updates...
	I1205 11:05:46.911739   11239 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:05:46.915769   11239 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:05:46.919846   11239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:05:46.922781   11239 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:05:46.925731   11239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:05:46.929115   11239 config.go:182] Loaded profile config "kubernetes-upgrade-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1205 11:05:46.929372   11239 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:05:46.932740   11239 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:05:46.939797   11239 start.go:297] selected driver: qemu2
	I1205 11:05:46.939802   11239 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:05:46.939839   11239 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:05:46.942401   11239 cni.go:84] Creating CNI manager for ""
	I1205 11:05:46.942432   11239 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:05:46.942458   11239 start.go:340] cluster config:
	{Name:kubernetes-upgrade-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:05:46.946450   11239 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:05:46.954829   11239 out.go:177] * Starting "kubernetes-upgrade-763000" primary control-plane node in "kubernetes-upgrade-763000" cluster
	I1205 11:05:46.957739   11239 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:05:46.957751   11239 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:05:46.957760   11239 cache.go:56] Caching tarball of preloaded images
	I1205 11:05:46.957819   11239 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:05:46.957824   11239 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:05:46.957870   11239 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kubernetes-upgrade-763000/config.json ...
	I1205 11:05:46.958435   11239 start.go:360] acquireMachinesLock for kubernetes-upgrade-763000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:05:46.958467   11239 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "kubernetes-upgrade-763000"
	I1205 11:05:46.958476   11239 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:05:46.958481   11239 fix.go:54] fixHost starting: 
	I1205 11:05:46.958595   11239 fix.go:112] recreateIfNeeded on kubernetes-upgrade-763000: state=Stopped err=<nil>
	W1205 11:05:46.958602   11239 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:05:46.965669   11239 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-763000" ...
	I1205 11:05:46.969797   11239 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:05:46.969833   11239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7d:19:81:09:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:46.971831   11239 main.go:141] libmachine: STDOUT: 
	I1205 11:05:46.971845   11239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:05:46.971874   11239 fix.go:56] duration metric: took 13.3935ms for fixHost
	I1205 11:05:46.971879   11239 start.go:83] releasing machines lock for "kubernetes-upgrade-763000", held for 13.407875ms
	W1205 11:05:46.971883   11239 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:05:46.971911   11239 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:05:46.971914   11239 start.go:729] Will try again in 5 seconds ...
	I1205 11:05:51.974056   11239 start.go:360] acquireMachinesLock for kubernetes-upgrade-763000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:05:51.974679   11239 start.go:364] duration metric: took 459.167µs to acquireMachinesLock for "kubernetes-upgrade-763000"
	I1205 11:05:51.974865   11239 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:05:51.974886   11239 fix.go:54] fixHost starting: 
	I1205 11:05:51.975677   11239 fix.go:112] recreateIfNeeded on kubernetes-upgrade-763000: state=Stopped err=<nil>
	W1205 11:05:51.975704   11239 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:05:51.984926   11239 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-763000" ...
	I1205 11:05:51.989096   11239 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:05:51.989480   11239 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:7d:19:81:09:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubernetes-upgrade-763000/disk.qcow2
	I1205 11:05:52.000226   11239 main.go:141] libmachine: STDOUT: 
	I1205 11:05:52.000376   11239 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:05:52.000439   11239 fix.go:56] duration metric: took 25.555041ms for fixHost
	I1205 11:05:52.000456   11239 start.go:83] releasing machines lock for "kubernetes-upgrade-763000", held for 25.729375ms
	W1205 11:05:52.000663   11239 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:05:52.008030   11239 out.go:201] 
	W1205 11:05:52.011191   11239 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:05:52.011243   11239 out.go:270] * 
	* 
	W1205 11:05:52.014044   11239 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:05:52.023106   11239 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-763000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-763000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-763000 version --output=json: exit status 1 (65.27475ms)

** stderr ** 
	error: context "kubernetes-upgrade-763000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-05 11:05:52.104279 -0800 PST m=+956.800500126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-763000 -n kubernetes-upgrade-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-763000 -n kubernetes-upgrade-763000: exit status 7 (37.000042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-763000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-763000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-763000
--- FAIL: TestKubernetesUpgrade (17.34s)
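Every start attempt in this test dies at the same point: the qemu2 driver asks socket_vmnet for a network file descriptor and gets "Connection refused" on /var/run/socket_vmnet, so no Kubernetes logic ever runs. A quick host-side triage, assuming socket_vmnet was installed via Homebrew and runs as a root launchd service (paths and service labels vary by install method):

    # Does the control socket exist on the agent?
    ls -l /var/run/socket_vmnet
    # Is the daemon loaded at all?
    sudo launchctl list | grep -i socket_vmnet
    # Restart the Homebrew-managed service, if that is how it was installed
    sudo brew services restart socket_vmnet

Until the daemon is restored, every qemu2 test on this agent that needs vmnet networking will fail the same way.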

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.95s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20052
- KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3289089593/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.95s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20052
- KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2703143443/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)
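Both TestHyperkitDriverSkipUpgrade subtests fail for the same environmental reason rather than a code regression: the hyperkit driver exists only for Intel Macs, and this job ran on an Apple silicon agent, so minikube rejects the driver up front with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic executes. Confirming the mismatch on the agent is a one-liner:

    # "arm64" means hyperkit can never run on this machine
    uname -m

These subtests would need an arch-based skip on darwin/arm64, or routing to an Intel agent.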

TestStoppedBinaryUpgrade/Upgrade (573.78s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2986091749 start -p stopped-upgrade-616000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2986091749 start -p stopped-upgrade-616000 --memory=2200 --vm-driver=qemu2 : (40.362467125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2986091749 -p stopped-upgrade-616000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2986091749 -p stopped-upgrade-616000 stop: (12.106032834s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.2325255s)

-- stdout --
	* [stopped-upgrade-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-616000" primary control-plane node in "stopped-upgrade-616000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-616000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1205 11:06:45.807067   11277 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:06:45.807233   11277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:06:45.807236   11277 out.go:358] Setting ErrFile to fd 2...
	I1205 11:06:45.807239   11277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:06:45.807371   11277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:06:45.808569   11277 out.go:352] Setting JSON to false
	I1205 11:06:45.828571   11277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5777,"bootTime":1733419828,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:06:45.828643   11277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:06:45.832761   11277 out.go:177] * [stopped-upgrade-616000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:06:45.840834   11277 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:06:45.840884   11277 notify.go:220] Checking for updates...
	I1205 11:06:45.848827   11277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:06:45.852729   11277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:06:45.855736   11277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:06:45.858775   11277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:06:45.861741   11277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:06:45.865085   11277 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:06:45.868775   11277 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:06:45.871752   11277 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:06:45.875794   11277 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:06:45.881811   11277 start.go:297] selected driver: qemu2
	I1205 11:06:45.881819   11277 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:06:45.881878   11277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:06:45.884743   11277 cni.go:84] Creating CNI manager for ""
	I1205 11:06:45.884774   11277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:06:45.884802   11277 start.go:340] cluster config:
	{Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:06:45.884857   11277 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:06:45.893761   11277 out.go:177] * Starting "stopped-upgrade-616000" primary control-plane node in "stopped-upgrade-616000" cluster
	I1205 11:06:45.897759   11277 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:06:45.897774   11277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1205 11:06:45.897781   11277 cache.go:56] Caching tarball of preloaded images
	I1205 11:06:45.897853   11277 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:06:45.897858   11277 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1205 11:06:45.897922   11277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1205 11:06:45.898405   11277 start.go:360] acquireMachinesLock for stopped-upgrade-616000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:06:45.898449   11277 start.go:364] duration metric: took 38.584µs to acquireMachinesLock for "stopped-upgrade-616000"
	I1205 11:06:45.898457   11277 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:06:45.898461   11277 fix.go:54] fixHost starting: 
	I1205 11:06:45.898566   11277 fix.go:112] recreateIfNeeded on stopped-upgrade-616000: state=Stopped err=<nil>
	W1205 11:06:45.898572   11277 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:06:45.906765   11277 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-616000" ...
	I1205 11:06:45.910800   11277 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:06:45.910867   11277 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51987-:22,hostfwd=tcp::51988-:2376,hostname=stopped-upgrade-616000 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/disk.qcow2
	I1205 11:06:45.957247   11277 main.go:141] libmachine: STDOUT: 
	I1205 11:06:45.957286   11277 main.go:141] libmachine: STDERR: 
	I1205 11:06:45.957294   11277 main.go:141] libmachine: Waiting for VM to start (ssh -p 51987 docker@127.0.0.1)...
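Note the contrast with the kubernetes-upgrade failure above: this qemu-system-aarch64 invocation uses QEMU user-mode networking (-nic user,... with hostfwd rules) rather than socket_vmnet, so the VM boots and is reachable through ports forwarded on 127.0.0.1 even while the socket_vmnet daemon is down. That is why this test gets through provisioning at all. With the VM up, the guest can be reached the same way the driver polls it, using the forwarded SSH port recorded in this run (port numbers are per-run):

    # manual reachability check over the hostfwd port from the log above
    ssh -p 51987 docker@127.0.0.1 true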
	I1205 11:07:05.790114   11277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/config.json ...
	I1205 11:07:05.790494   11277 machine.go:93] provisionDockerMachine start ...
	I1205 11:07:05.790589   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:05.790821   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:05.790829   11277 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 11:07:05.851394   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 11:07:05.851409   11277 buildroot.go:166] provisioning hostname "stopped-upgrade-616000"
	I1205 11:07:05.851482   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:05.851596   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:05.851602   11277 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-616000 && echo "stopped-upgrade-616000" | sudo tee /etc/hostname
	I1205 11:07:05.912653   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-616000
	
	I1205 11:07:05.912720   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:05.912829   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:05.912838   11277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-616000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-616000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-616000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 11:07:05.975243   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 11:07:05.975255   11277 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20052-8600/.minikube CaCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20052-8600/.minikube}
	I1205 11:07:05.975269   11277 buildroot.go:174] setting up certificates
	I1205 11:07:05.975273   11277 provision.go:84] configureAuth start
	I1205 11:07:05.975276   11277 provision.go:143] copyHostCerts
	I1205 11:07:05.975360   11277 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem, removing ...
	I1205 11:07:05.975368   11277 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem
	I1205 11:07:05.975496   11277 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.pem (1082 bytes)
	I1205 11:07:05.975708   11277 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem, removing ...
	I1205 11:07:05.975711   11277 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem
	I1205 11:07:05.975767   11277 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/cert.pem (1123 bytes)
	I1205 11:07:05.975886   11277 exec_runner.go:144] found /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem, removing ...
	I1205 11:07:05.975889   11277 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem
	I1205 11:07:05.975939   11277 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20052-8600/.minikube/key.pem (1679 bytes)
	I1205 11:07:05.976034   11277 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-616000 san=[127.0.0.1 localhost minikube stopped-upgrade-616000]
	I1205 11:07:06.027537   11277 provision.go:177] copyRemoteCerts
	I1205 11:07:06.027587   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 11:07:06.027594   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:07:06.058736   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 11:07:06.065491   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 11:07:06.072426   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 11:07:06.079481   11277 provision.go:87] duration metric: took 104.205ms to configureAuth
	I1205 11:07:06.079490   11277 buildroot.go:189] setting minikube options for container-runtime
	I1205 11:07:06.079608   11277 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:07:06.079654   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.079738   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.079743   11277 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1205 11:07:06.139653   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1205 11:07:06.139663   11277 buildroot.go:70] root file system type: tmpfs
	I1205 11:07:06.139719   11277 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1205 11:07:06.139784   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.139896   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.139935   11277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1205 11:07:06.205726   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1205 11:07:06.205796   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.205906   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.205915   11277 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1205 11:07:06.570946   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1205 11:07:06.570961   11277 machine.go:96] duration metric: took 780.467125ms to provisionDockerMachine
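Provisioning uses a write-then-swap idiom to stay idempotent: the rendered unit goes to docker.service.new, and only when it differs from the installed unit (or, as here, diff fails with "can't stat" because no unit exists yet) does the || branch move it into place, daemon-reload, enable, and restart Docker. Reduced to a sketch of the sequence above (the unit body is elided):

    # install-if-changed idiom for a systemd unit (sketch, not verbatim)
    sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
    ...rendered unit...
    EOF
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service \
           && sudo systemctl daemon-reload && sudo systemctl -f enable docker && sudo systemctl restart docker; }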
	I1205 11:07:06.570968   11277 start.go:293] postStartSetup for "stopped-upgrade-616000" (driver="qemu2")
	I1205 11:07:06.570975   11277 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 11:07:06.571051   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 11:07:06.571061   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:07:06.602746   11277 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 11:07:06.604050   11277 info.go:137] Remote host: Buildroot 2021.02.12
	I1205 11:07:06.604058   11277 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20052-8600/.minikube/addons for local assets ...
	I1205 11:07:06.604151   11277 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20052-8600/.minikube/files for local assets ...
	I1205 11:07:06.604292   11277 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem -> 91362.pem in /etc/ssl/certs
	I1205 11:07:06.604460   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 11:07:06.607485   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem --> /etc/ssl/certs/91362.pem (1708 bytes)
	I1205 11:07:06.614699   11277 start.go:296] duration metric: took 43.726ms for postStartSetup
	I1205 11:07:06.614713   11277 fix.go:56] duration metric: took 20.716471834s for fixHost
	I1205 11:07:06.614756   11277 main.go:141] libmachine: Using SSH client type: native
	I1205 11:07:06.614854   11277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6afc0] 0x102c6d800 <nil>  [] 0s} localhost 51987 <nil> <nil>}
	I1205 11:07:06.614865   11277 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 11:07:06.672418   11277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733425626.957073212
	
	I1205 11:07:06.672429   11277 fix.go:216] guest clock: 1733425626.957073212
	I1205 11:07:06.672433   11277 fix.go:229] Guest: 2024-12-05 11:07:06.957073212 -0800 PST Remote: 2024-12-05 11:07:06.614715 -0800 PST m=+20.837003751 (delta=342.358212ms)
	I1205 11:07:06.672444   11277 fix.go:200] guest clock delta is within tolerance: 342.358212ms
	I1205 11:07:06.672448   11277 start.go:83] releasing machines lock for "stopped-upgrade-616000", held for 20.774214375s
	I1205 11:07:06.672527   11277 ssh_runner.go:195] Run: cat /version.json
	I1205 11:07:06.672528   11277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 11:07:06.672539   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:07:06.672546   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	W1205 11:07:06.673158   11277 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51987: connect: connection refused
	I1205 11:07:06.673181   11277 retry.go:31] will retry after 177.300471ms: dial tcp [::1]:51987: connect: connection refused
	W1205 11:07:06.705013   11277 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1205 11:07:06.705065   11277 ssh_runner.go:195] Run: systemctl --version
	I1205 11:07:06.707050   11277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 11:07:06.708936   11277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 11:07:06.708976   11277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1205 11:07:06.712004   11277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1205 11:07:06.716734   11277 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 11:07:06.716744   11277 start.go:495] detecting cgroup driver to use...
	I1205 11:07:06.716814   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:07:06.723995   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1205 11:07:06.727378   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1205 11:07:06.730698   11277 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1205 11:07:06.730737   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1205 11:07:06.733840   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:07:06.736732   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1205 11:07:06.740004   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1205 11:07:06.743412   11277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 11:07:06.746656   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1205 11:07:06.749512   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1205 11:07:06.752282   11277 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1205 11:07:06.755506   11277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 11:07:06.758676   11277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 11:07:06.761361   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:06.852396   11277 ssh_runner.go:195] Run: sudo systemctl restart containerd
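The sed passes above normalize /etc/containerd/config.toml before that restart: they pin the sandbox (pause) image, force SystemdCgroup = false since this buildroot guest uses the cgroupfs driver, migrate the deprecated io.containerd.runtime.v1.linux and runc.v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The cgroup-driver edit on its own, taken from the log:

    # force the cgroupfs driver in containerd's CRI plugin, then apply it
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd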
	I1205 11:07:06.862244   11277 start.go:495] detecting cgroup driver to use...
	I1205 11:07:06.862336   11277 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1205 11:07:06.873694   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:07:06.889551   11277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 11:07:06.906323   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 11:07:06.938253   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:07:06.943790   11277 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1205 11:07:07.009343   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1205 11:07:07.014550   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 11:07:07.020372   11277 ssh_runner.go:195] Run: which cri-dockerd
	I1205 11:07:07.021812   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1205 11:07:07.024607   11277 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1205 11:07:07.029971   11277 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1205 11:07:07.101201   11277 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1205 11:07:07.182868   11277 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1205 11:07:07.182927   11277 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1205 11:07:07.188702   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:07.275341   11277 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:07:08.424376   11277 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1490185s)
	I1205 11:07:08.424464   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1205 11:07:08.429354   11277 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1205 11:07:08.435647   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:07:08.440819   11277 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1205 11:07:08.524492   11277 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1205 11:07:08.595850   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:08.683514   11277 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1205 11:07:08.690069   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1205 11:07:08.694562   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:08.757763   11277 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1205 11:07:08.795631   11277 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1205 11:07:08.795718   11277 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1205 11:07:08.797717   11277 start.go:563] Will wait 60s for crictl version
	I1205 11:07:08.797779   11277 ssh_runner.go:195] Run: which crictl
	I1205 11:07:08.799249   11277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 11:07:08.814813   11277 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1205 11:07:08.814890   11277 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:07:08.832322   11277 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1205 11:07:08.852196   11277 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1205 11:07:08.852286   11277 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1205 11:07:08.853705   11277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 11:07:08.857347   11277 kubeadm.go:883] updating cluster {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1205 11:07:08.857395   11277 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1205 11:07:08.857443   11277 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:07:08.868126   11277 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:07:08.868135   11277 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:07:08.868195   11277 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:07:08.871802   11277 ssh_runner.go:195] Run: which lz4
	I1205 11:07:08.872980   11277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 11:07:08.874284   11277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 11:07:08.874292   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1205 11:07:09.799856   11277 docker.go:653] duration metric: took 926.925916ms to copy over tarball
	I1205 11:07:09.799928   11277 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 11:07:10.971867   11277 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171934834s)
	I1205 11:07:10.971882   11277 ssh_runner.go:146] rm: /preloaded.tar.lz4
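Rather than pulling nine images over the network, the start path copies a ~360 MB lz4-compressed preload of /var/lib/docker into the guest and unpacks it over /var; extraction takes about 1.2s here. The unpack command from the log works against any minikube preload tarball, provided lz4 is on the guest's PATH:

    # unpack a preload tarball over /var, preserving file capabilities
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4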
	I1205 11:07:10.988209   11277 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1205 11:07:10.991843   11277 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1205 11:07:10.997262   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:11.075722   11277 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1205 11:07:12.631648   11277 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.555926333s)
	I1205 11:07:12.631743   11277 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1205 11:07:12.643041   11277 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1205 11:07:12.643052   11277 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1205 11:07:12.643057   11277 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 11:07:12.648898   11277 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:12.650777   11277 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:12.652461   11277 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:12.652459   11277 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:12.654179   11277 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:12.654376   11277 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:12.655991   11277 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:12.655962   11277 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:12.657305   11277 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:12.657384   11277 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:12.658589   11277 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:12.658714   11277 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 11:07:12.659875   11277 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:12.660006   11277 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:12.661032   11277 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 11:07:12.661741   11277 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.189827   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:13.200419   11277 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1205 11:07:13.200455   11277 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:13.200515   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1205 11:07:13.211501   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1205 11:07:13.230857   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:13.241540   11277 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1205 11:07:13.241576   11277 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:13.241644   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1205 11:07:13.243159   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:13.253432   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1205 11:07:13.260907   11277 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1205 11:07:13.260930   11277 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:13.261001   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1205 11:07:13.271406   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1205 11:07:13.312809   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:13.323288   11277 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1205 11:07:13.323313   11277 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:13.323379   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1205 11:07:13.335240   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:13.337903   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1205 11:07:13.346951   11277 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1205 11:07:13.346975   11277 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:13.347052   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1205 11:07:13.357071   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1205 11:07:13.431160   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 11:07:13.440908   11277 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1205 11:07:13.440927   11277 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1205 11:07:13.440987   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1205 11:07:13.450757   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1205 11:07:13.450901   11277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 11:07:13.453265   11277 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1205 11:07:13.453278   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1205 11:07:13.461409   11277 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 11:07:13.461417   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1205 11:07:13.486636   11277 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1205 11:07:13.549466   11277 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1205 11:07:13.549621   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.560640   11277 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1205 11:07:13.560666   11277 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.560734   11277 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 11:07:13.570818   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 11:07:13.570969   11277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:07:13.572445   11277 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1205 11:07:13.572457   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1205 11:07:13.617167   11277 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 11:07:13.617180   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1205 11:07:13.656281   11277 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1205 11:07:13.664214   11277 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1205 11:07:13.664349   11277 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:13.674668   11277 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1205 11:07:13.674696   11277 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:13.674757   11277 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:07:13.688899   11277 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 11:07:13.689038   11277 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:07:13.690351   11277 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1205 11:07:13.690363   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1205 11:07:13.719544   11277 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 11:07:13.719557   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1205 11:07:13.959591   11277 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 11:07:13.959632   11277 cache_images.go:92] duration metric: took 1.316580958s to LoadCachedImages
	W1205 11:07:13.959672   11277 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
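
The lines above show minikube's cached-image load loop: for each image, `docker image inspect` checks whether the runtime already has the image at the expected hash, `docker rmi` drops a stale copy, a `stat` existence check decides whether the tarball must be copied over, and the tar is finally piped through `docker load`. A minimal stand-alone sketch of that pattern in Go (hypothetical helper name; local `docker` CLI assumed on PATH, whereas the real code drives these commands over SSH via ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImageFromTar mirrors the load step seen in the log: if the image is
// not present in the container runtime, stream the cached tarball into
// `docker load`. The image ref and path below are taken from the log but
// the helper itself is illustrative only.
func loadImageFromTar(imageRef, tarPath string) error {
	// Does the runtime already have the image? `docker image inspect`
	// exits non-zero when it does not.
	if err := exec.Command("docker", "image", "inspect", imageRef).Run(); err == nil {
		return nil // already loaded, nothing to transfer
	}
	f, err := os.Open(tarPath)
	if err != nil {
		return fmt.Errorf("open cached image: %w", err)
	}
	defer f.Close()
	// Equivalent of `cat <tar> | docker load` from the log.
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImageFromTar("registry.k8s.io/pause:3.7",
		"/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Note how the run above still fails overall: kube-apiserver_v1.24.1 was never cached, so LoadCachedImages aborts with the "no such file or directory" warning even though pause, coredns and storage-provisioner loaded fine.
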
	I1205 11:07:13.959676   11277 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1205 11:07:13.959728   11277 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-616000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 11:07:13.959801   11277 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1205 11:07:13.973930   11277 cni.go:84] Creating CNI manager for ""
	I1205 11:07:13.973946   11277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:07:13.973953   11277 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 11:07:13.973961   11277 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-616000 NodeName:stopped-upgrade-616000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 11:07:13.974046   11277 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-616000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
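
The rendered config above is one file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A minimal Go sketch of walking such a multi-document file, assuming gopkg.in/yaml.v3 is available (the path is taken from the log; everything else is illustrative):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// yaml.Decoder yields one document per Decode call, so the `---`
	// separators are handled for us.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents
			}
			panic(err)
		}
		// Each document carries its own kind, e.g. InitConfiguration.
		fmt.Println(doc["kind"])
	}
}
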
	
	I1205 11:07:13.974125   11277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1205 11:07:13.976878   11277 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 11:07:13.976915   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 11:07:13.979721   11277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1205 11:07:13.984912   11277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 11:07:13.989696   11277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1205 11:07:13.994792   11277 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1205 11:07:13.996041   11277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
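
The bash one-liner above is minikube's idempotent hosts-entry update: strip any existing `control-plane.minikube.internal` line, append the fresh `10.0.2.15` mapping, write to a temp file, then copy it back over /etc/hosts. The same filter-and-append logic in plain Go, as a sketch only (the real code runs the shell pipeline remotely as root; the output path here is hypothetical):

package main

import (
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "<TAB>host" and appends a fresh
// "ip<TAB>host" mapping, mirroring the grep/echo pipeline in the log.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry, re-added below
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		"\n" + ip + "\t" + host + "\n"
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	out := upsertHostsEntry(string(in), "10.0.2.15", "control-plane.minikube.internal")
	// Stage to a temp file first, as the shell version does with /tmp/h.$$.
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
}
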
	I1205 11:07:14.000076   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:07:14.080441   11277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:07:14.089545   11277 certs.go:68] Setting up /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000 for IP: 10.0.2.15
	I1205 11:07:14.089558   11277 certs.go:194] generating shared ca certs ...
	I1205 11:07:14.089567   11277 certs.go:226] acquiring lock for ca certs: {Name:mk120c2a781c4636bd95493f524c24b1dcf3780a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.089759   11277 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.key
	I1205 11:07:14.090523   11277 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.key
	I1205 11:07:14.090531   11277 certs.go:256] generating profile certs ...
	I1205 11:07:14.090830   11277 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.key
	I1205 11:07:14.090850   11277 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213
	I1205 11:07:14.090859   11277 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1205 11:07:14.163734   11277 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 ...
	I1205 11:07:14.163753   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213: {Name:mk558acf8deae327405a8215bab480af41d675bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.164126   11277 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 ...
	I1205 11:07:14.164131   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213: {Name:mk6c029614b2bb5f744c5800561c046feb5faba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.164314   11277 certs.go:381] copying /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt.80122213 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt
	I1205 11:07:14.164444   11277 certs.go:385] copying /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key.80122213 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key
	I1205 11:07:14.164759   11277 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/proxy-client.key
	I1205 11:07:14.164953   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136.pem (1338 bytes)
	W1205 11:07:14.165168   11277 certs.go:480] ignoring /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136_empty.pem, impossibly tiny 0 bytes
	I1205 11:07:14.165176   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca-key.pem (1675 bytes)
	I1205 11:07:14.165201   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem (1082 bytes)
	I1205 11:07:14.165222   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem (1123 bytes)
	I1205 11:07:14.165246   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/key.pem (1679 bytes)
	I1205 11:07:14.165292   11277 certs.go:484] found cert: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem (1708 bytes)
	I1205 11:07:14.165681   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 11:07:14.172429   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 11:07:14.180127   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 11:07:14.187031   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 11:07:14.193536   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 11:07:14.200238   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 11:07:14.207484   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 11:07:14.214981   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 11:07:14.222304   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/ssl/certs/91362.pem --> /usr/share/ca-certificates/91362.pem (1708 bytes)
	I1205 11:07:14.229213   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 11:07:14.235983   11277 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/9136.pem --> /usr/share/ca-certificates/9136.pem (1338 bytes)
	I1205 11:07:14.243219   11277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 11:07:14.248290   11277 ssh_runner.go:195] Run: openssl version
	I1205 11:07:14.250181   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/91362.pem && ln -fs /usr/share/ca-certificates/91362.pem /etc/ssl/certs/91362.pem"
	I1205 11:07:14.253100   11277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/91362.pem
	I1205 11:07:14.254435   11277 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 18:50 /usr/share/ca-certificates/91362.pem
	I1205 11:07:14.254462   11277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/91362.pem
	I1205 11:07:14.256150   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/91362.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 11:07:14.259710   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 11:07:14.263071   11277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:07:14.264457   11277 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 19:03 /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:07:14.264486   11277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 11:07:14.266207   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 11:07:14.269229   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9136.pem && ln -fs /usr/share/ca-certificates/9136.pem /etc/ssl/certs/9136.pem"
	I1205 11:07:14.272313   11277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9136.pem
	I1205 11:07:14.273671   11277 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 18:50 /usr/share/ca-certificates/9136.pem
	I1205 11:07:14.273695   11277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9136.pem
	I1205 11:07:14.275376   11277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9136.pem /etc/ssl/certs/51391683.0"
	I1205 11:07:14.278769   11277 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 11:07:14.280139   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 11:07:14.282971   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 11:07:14.285237   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 11:07:14.287339   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 11:07:14.289215   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 11:07:14.290989   11277 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
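
Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours (86400 seconds) from now; exit status 0 means it will be. The equivalent check using only Go's standard library, on one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go analogue of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expired "within d" means the cutoff instant is past NotAfter.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
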
	I1205 11:07:14.293124   11277 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52022 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-616000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1205 11:07:14.293215   11277 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:07:14.303255   11277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 11:07:14.306729   11277 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 11:07:14.306739   11277 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 11:07:14.306770   11277 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 11:07:14.309594   11277 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 11:07:14.309886   11277 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-616000" does not appear in /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:07:14.310013   11277 kubeconfig.go:62] /Users/jenkins/minikube-integration/20052-8600/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-616000" cluster setting kubeconfig missing "stopped-upgrade-616000" context setting]
	I1205 11:07:14.310209   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:07:14.310625   11277 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046c7740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:07:14.311122   11277 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 11:07:14.313954   11277 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-616000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
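
The drift check above is just `diff -u old new`: exit status 0 means identical, 1 means the files differ (and the cluster must be reconfigured from the `.new` file), anything else is a real error. A small sketch of reading that three-way exit status from Go, with the two paths taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and interprets diff's exit codes:
// 0 = identical, 1 = files differ, anything else = genuine failure.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // no drift
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // drift; out holds the unified diff
	}
	return false, "", err
}

func main() {
	drift, diff, err := configDrift(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("kubeadm config drift detected:\n" + diff)
	}
}

In this run the drift is the cri-dockerd socket gaining its unix:// scheme and the kubelet cgroup driver switching from systemd to cgroupfs, so minikube proceeds to stop the kube-system containers and reconfigure.
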
	I1205 11:07:14.313959   11277 kubeadm.go:1160] stopping kube-system containers ...
	I1205 11:07:14.314005   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1205 11:07:14.325294   11277 docker.go:483] Stopping containers: [bd0711c054c5 0275d18bc05a c475eaff13ec 1447f2c97140 b8c08aff7dab c744ec1de700 0279ac793008 d31b4a0b903b]
	I1205 11:07:14.325377   11277 ssh_runner.go:195] Run: docker stop bd0711c054c5 0275d18bc05a c475eaff13ec 1447f2c97140 b8c08aff7dab c744ec1de700 0279ac793008 d31b4a0b903b
	I1205 11:07:14.335694   11277 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 11:07:14.341573   11277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:07:14.344270   11277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:07:14.344275   11277 kubeadm.go:157] found existing configuration files:
	
	I1205 11:07:14.344300   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf
	I1205 11:07:14.347512   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:07:14.347543   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:07:14.350778   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf
	I1205 11:07:14.353313   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:07:14.353353   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:07:14.356007   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf
	I1205 11:07:14.359310   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:07:14.359348   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:07:14.362286   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf
	I1205 11:07:14.364627   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:07:14.364652   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
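
The four grep/rm pairs above implement one rule: a leftover kubeconfig that does not reference the expected control-plane endpoint is deleted so the following `kubeadm init phase kubeconfig` can regenerate it. Distilled into a Go sketch (endpoint and paths from the log; in this run the files were simply absent, so each grep exited 2 and the rm -f was a no-op):

package main

import (
	"os"
	"strings"
)

// removeIfStale deletes conf unless it references endpoint, mirroring the
// `grep <endpoint> <conf> || rm -f <conf>` pairs in the log. A missing file
// stays missing, matching rm -f semantics.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // points at the right endpoint, keep it
	}
	if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:52022"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(conf, endpoint); err != nil {
			panic(err)
		}
	}
}
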
	I1205 11:07:14.367632   11277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:07:14.370950   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.397134   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.744507   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.872788   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.896881   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 11:07:14.927413   11277 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:07:14.927503   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:07:15.429580   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:07:15.929634   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:07:15.934485   11277 api_server.go:72] duration metric: took 1.007083166s to wait for apiserver process to appear ...
	I1205 11:07:15.934496   11277 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:07:15.934511   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:20.936594   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:20.936661   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:25.936994   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:25.937018   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:30.937657   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:30.937682   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:35.938176   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:35.938217   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:40.938875   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:40.938917   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:45.939910   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:45.939942   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:50.940948   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:50.940992   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:07:55.943041   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:07:55.943069   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:00.945020   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:00.945049   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:05.947248   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:05.947288   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:10.949574   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:10.949612   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:15.951780   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
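
The block above is the apiserver readiness poll: each attempt GETs https://10.0.2.15:8443/healthz with a 5-second client timeout, and on this run every attempt times out because the control plane never comes up. A minimal sketch of such a poll in Go (InsecureSkipVerify is for illustration only; the real client pins minikube's CA, and the retry cadence here is an assumption):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it answers 200 OK or the deadline passes.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the per-request timeout in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between attempts
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Once the poll stalls, the log switches to diagnostics, enumerating control-plane containers and pulling their logs, as the next lines show.
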
	I1205 11:08:15.951947   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:15.966996   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:15.967089   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:15.984901   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:15.984976   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:16.000677   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:16.000762   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:16.011795   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:16.011880   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:16.021907   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:16.021986   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:16.032949   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:16.033030   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:16.043115   11277 logs.go:282] 0 containers: []
	W1205 11:08:16.043127   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:16.043194   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:16.053240   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:16.053266   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:16.053272   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:16.068701   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:16.068716   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:16.086251   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:16.086266   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:16.098392   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:16.098402   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:16.136960   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:16.136970   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:16.249812   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:16.249825   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:16.265819   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:16.265831   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:16.282641   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:16.282653   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:16.313692   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:16.313713   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:16.333469   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:16.333483   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:16.346792   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:16.346804   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:16.359175   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:16.359192   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:16.384925   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:16.384945   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:16.389957   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:16.389968   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:16.405710   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:16.405724   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:16.418974   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:16.418988   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
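
This log-gathering round (repeated below after each failed healthz window) follows a two-step pattern: one `docker ps -a --filter name=k8s_<component> --format {{.ID}}` per control-plane component to find container IDs, then `docker logs --tail 400` for each ID. The same pattern stripped down to a Go sketch (local docker assumed; the real runs go over SSH, and the component list here is abbreviated):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, running or exited, whose name matches
// k8s_<component> -- the filter used throughout the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, comp := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		ids, err := containerIDs(comp)
		if err != nil {
			panic(err)
		}
		for _, id := range ids {
			// Mirror `docker logs --tail 400 <id>` from the log.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", comp, id, logs)
		}
	}
}
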
	I1205 11:08:18.935179   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:23.935886   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:23.936224   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:23.965642   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:23.965796   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:23.983718   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:23.983827   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:24.000881   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:24.000963   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:24.012619   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:24.012701   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:24.023667   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:24.023740   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:24.034018   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:24.034092   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:24.044890   11277 logs.go:282] 0 containers: []
	W1205 11:08:24.044900   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:24.044962   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:24.055874   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:24.055896   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:24.055902   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:24.081625   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:24.081635   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:24.096139   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:24.096148   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:24.108994   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:24.109008   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:24.147385   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:24.147393   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:24.151246   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:24.151254   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:24.165105   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:24.165118   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:24.179988   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:24.179999   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:24.191088   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:24.191100   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:24.202548   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:24.202560   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:24.214488   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:24.214499   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:24.249742   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:24.249752   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:24.263557   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:24.263566   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:24.288132   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:24.288142   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:24.299607   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:24.299618   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:24.323133   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:24.323145   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:26.838005   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:31.840267   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:31.840586   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:31.869908   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:31.870052   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:31.888014   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:31.888128   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:31.901356   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:31.901446   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:31.912532   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:31.912606   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:31.922901   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:31.922982   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:31.933806   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:31.933892   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:31.943919   11277 logs.go:282] 0 containers: []
	W1205 11:08:31.943932   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:31.943993   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:31.958272   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:31.958289   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:31.958294   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:31.962724   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:31.962730   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:31.976552   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:31.976563   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:31.991312   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:31.991324   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:32.029947   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:32.029962   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:32.062023   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:32.062034   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:32.073543   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:32.073554   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:32.090437   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:32.090446   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:32.103243   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:32.103254   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:32.115063   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:32.115074   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:32.140174   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:32.140185   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:32.152001   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:32.152019   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:32.191475   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:32.191487   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:32.205558   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:32.205571   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:32.216896   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:32.216907   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:32.228479   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:32.228490   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:34.745191   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:39.745873   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:39.746027   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:39.756930   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:39.757013   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:39.767744   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:39.767827   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:39.778117   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:39.778196   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:39.788422   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:39.788508   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:39.803396   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:39.803471   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:39.814060   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:39.814134   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:39.824643   11277 logs.go:282] 0 containers: []
	W1205 11:08:39.824655   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:39.824712   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:39.836250   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:39.836269   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:39.836275   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:39.874531   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:39.874542   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:39.888741   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:39.888755   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:39.903842   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:39.903854   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:39.916931   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:39.916941   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:39.941112   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:39.941120   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:39.953204   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:39.953215   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:39.966705   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:39.966719   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:39.984071   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:39.984080   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:39.988090   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:39.988097   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:40.027074   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:40.027085   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:40.039233   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:40.039244   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:40.054318   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:40.054329   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:40.069516   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:40.069527   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:40.096123   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:40.096134   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:40.110744   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:40.110754   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:42.632086   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:47.634434   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:47.634737   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:47.658400   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:47.658517   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:47.674620   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:47.674708   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:47.699918   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:47.699999   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:47.710872   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:47.710963   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:47.721869   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:47.721944   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:47.732923   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:47.733004   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:47.743023   11277 logs.go:282] 0 containers: []
	W1205 11:08:47.743034   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:47.743097   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:47.753764   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:47.753784   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:47.753790   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:47.779669   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:47.779680   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:47.791428   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:47.791442   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:47.827279   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:47.827287   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:47.850165   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:47.850175   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:47.867351   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:47.867361   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:08:47.892396   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:47.892402   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:47.896383   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:47.896391   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:47.916578   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:47.916588   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:47.931315   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:47.931327   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:47.946078   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:47.946090   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:47.958053   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:47.958064   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:47.994790   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:47.994801   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:48.012537   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:48.012548   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:48.023980   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:48.023989   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:48.036577   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:48.036589   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
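
Each retry cycle starts by locating the component containers: one `docker ps -a` per component, filtered on the k8s_<component> name prefix that kubelet-managed Docker containers carry, printing only IDs. Two IDs per component in this run indicate an exited earlier instance alongside the current one. A sketch of that discovery step under those assumptions (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // a component, using the same filter/format flags as the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            // Zero hits produces the `No container was found matching "kindnet"`
            // warning seen throughout this run.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
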
	I1205 11:08:50.551094   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:08:55.553458   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:08:55.553665   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:08:55.568373   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:08:55.568458   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:08:55.579716   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:08:55.579798   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:08:55.590107   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:08:55.590185   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:08:55.600831   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:08:55.600912   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:08:55.611175   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:08:55.611257   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:08:55.621743   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:08:55.621825   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:08:55.631952   11277 logs.go:282] 0 containers: []
	W1205 11:08:55.631965   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:08:55.632031   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:08:55.642107   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:08:55.642124   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:08:55.642129   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:08:55.679403   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:08:55.679412   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:08:55.683429   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:08:55.683434   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:08:55.705025   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:08:55.705037   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:08:55.716817   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:08:55.716827   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:08:55.731642   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:08:55.731653   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:08:55.749000   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:08:55.749010   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:08:55.761432   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:08:55.761444   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:08:55.787035   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:08:55.787046   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:08:55.798330   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:08:55.798341   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:08:55.832561   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:08:55.832572   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:08:55.846546   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:08:55.846558   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:08:55.867280   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:08:55.867290   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:08:55.879132   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:08:55.879143   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:08:55.892328   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:08:55.892338   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:08:55.904048   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:08:55.904058   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
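
With the container IDs in hand, the "Gathering logs for ..." pass fans out over fixed shell commands: `docker logs --tail 400 <id>` per container, `journalctl` for the kubelet and docker/cri-docker units, a severity-filtered `dmesg`, and the versioned in-VM `kubectl describe nodes`. In minikube these run through an SSH runner inside the guest; the sketch below runs a representative subset locally, for illustration only (the structure and helper name are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command through bash, the way the
    // ssh_runner lines above invoke `/bin/bash -c "..."` inside the guest VM.
    func gather(source, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", source)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("  %s failed: %v\n", source, err)
            return
        }
        fmt.Printf("  %d bytes collected\n", len(out))
    }

    func main() {
        // Commands copied verbatim from the log; the container ID is one of the
        // kube-apiserver instances listed above.
        sources := [][2]string{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
            {"kube-apiserver [a418a84ef6cc]", "docker logs --tail 400 a418a84ef6cc"},
        }
        for _, s := range sources {
            gather(s[0], s[1])
        }
    }
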
	I1205 11:08:58.429702   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:03.432332   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:03.432490   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:03.443978   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:03.444072   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:03.454424   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:03.454515   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:03.465393   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:03.465468   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:03.475748   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:03.475826   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:03.486521   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:03.486602   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:03.497784   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:03.497856   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:03.507577   11277 logs.go:282] 0 containers: []
	W1205 11:09:03.507589   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:03.507652   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:03.518587   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:03.518605   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:03.518611   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:03.557262   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:03.557275   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:03.569188   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:03.569200   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:03.588402   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:03.588414   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:03.602068   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:03.602080   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:03.617604   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:03.617614   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:03.632537   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:03.632548   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:03.647929   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:03.647942   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:03.652287   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:03.652293   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:03.690534   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:03.690546   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:03.704979   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:03.704991   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:03.716934   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:03.716949   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:03.736742   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:03.736752   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:03.762626   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:03.762642   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:03.801929   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:03.801946   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:03.813797   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:03.813810   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
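
The "container status" source is the one command with a built-in fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers crictl when it is on PATH and otherwise degrades to plain `docker ps -a`. A rough Go equivalent of that shell logic (illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mimics the fallback one-liner from the log: try crictl
    // first if it exists on PATH, otherwise fall back to `docker ps -a`.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(string(out))
    }
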
	I1205 11:09:06.327743   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:11.330443   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:11.330637   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:11.343807   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:11.343902   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:11.354910   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:11.354993   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:11.365780   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:11.365857   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:11.376495   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:11.376581   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:11.386978   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:11.387064   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:11.397385   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:11.397463   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:11.408004   11277 logs.go:282] 0 containers: []
	W1205 11:09:11.408017   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:11.408085   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:11.418649   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:11.418666   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:11.418673   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:11.460242   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:11.460251   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:11.474345   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:11.474358   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:11.491562   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:11.491571   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:11.495862   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:11.495867   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:11.526743   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:11.526766   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:11.541377   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:11.541388   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:11.554422   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:11.554433   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:11.569384   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:11.569393   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:11.592391   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:11.592402   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:11.617369   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:11.617378   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:11.656086   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:11.656098   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:11.667628   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:11.667639   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:11.679461   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:11.679474   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:11.692049   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:11.692059   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:11.706589   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:11.706600   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:14.220544   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:19.223129   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:19.223262   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:19.234947   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:19.235039   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:19.246532   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:19.246612   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:19.257505   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:19.257579   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:19.268986   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:19.269067   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:19.284397   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:19.284474   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:19.295114   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:19.295191   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:19.305655   11277 logs.go:282] 0 containers: []
	W1205 11:09:19.305667   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:19.305733   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:19.322043   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:19.322066   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:19.322075   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:19.336211   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:19.336223   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:19.347684   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:19.347694   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:19.371042   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:19.371055   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:19.382639   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:19.382649   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:19.419656   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:19.419672   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:19.454364   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:19.454376   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:19.468453   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:19.468463   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:19.493647   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:19.493660   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:19.508590   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:19.508600   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:19.529591   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:19.529603   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:19.541707   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:19.541718   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:19.546626   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:19.546633   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:19.560829   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:19.560842   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:19.572056   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:19.572069   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:19.583574   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:19.583584   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:22.151210   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:27.153624   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:27.153749   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:27.166108   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:27.166177   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:27.178334   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:27.178404   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:27.189380   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:27.189460   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:27.200238   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:27.200319   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:27.211113   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:27.211189   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:27.222191   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:27.222277   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:27.232543   11277 logs.go:282] 0 containers: []
	W1205 11:09:27.232556   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:27.232623   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:27.247815   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:27.247835   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:27.247841   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:27.283453   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:27.283465   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:27.297488   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:27.297498   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:27.322976   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:27.322988   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:27.336918   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:27.336927   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:27.349360   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:27.349375   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:27.385891   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:27.385900   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:27.402647   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:27.402657   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:27.416374   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:27.416386   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:27.440981   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:27.440989   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:27.455612   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:27.455624   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:27.467459   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:27.467470   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:27.479274   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:27.479285   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:27.483845   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:27.483853   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:27.498797   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:27.498808   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:27.510708   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:27.510719   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:30.025043   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:35.027372   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:35.027641   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:35.056614   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:35.056741   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:35.074148   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:35.074239   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:35.086113   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:35.086200   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:35.097093   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:35.097173   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:35.108025   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:35.108113   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:35.122935   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:35.123010   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:35.133639   11277 logs.go:282] 0 containers: []
	W1205 11:09:35.133650   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:35.133721   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:35.143923   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:35.143942   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:35.143948   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:35.180201   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:35.180214   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:35.194774   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:35.194785   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:35.215816   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:35.215826   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:35.228021   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:35.228032   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:35.245268   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:35.245280   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:35.269764   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:35.269774   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:35.291083   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:35.291097   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:35.306463   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:35.306476   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:35.318334   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:35.318345   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:35.335161   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:35.335170   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:35.361067   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:35.361082   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:35.365289   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:35.365294   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:35.400761   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:35.400772   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:35.412670   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:35.412691   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:35.426272   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:35.426283   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:37.939973   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:42.942273   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:42.942428   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:42.955192   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:42.955286   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:42.966190   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:42.966262   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:42.976475   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:42.976553   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:42.987079   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:42.987149   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:42.997940   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:42.998018   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:43.012794   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:43.012883   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:43.024000   11277 logs.go:282] 0 containers: []
	W1205 11:09:43.024012   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:43.024075   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:43.036048   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:43.036069   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:43.036075   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:43.074512   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:43.074523   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:43.088890   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:43.088900   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:43.100051   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:43.100062   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:43.112129   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:43.112140   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:43.125327   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:43.125337   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:43.136832   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:43.136842   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:43.174260   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:43.174272   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:43.179290   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:43.179298   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:43.197039   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:43.197051   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:43.211980   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:43.211992   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:43.236987   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:43.236996   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:43.261295   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:43.261305   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:43.276902   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:43.276913   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:43.288421   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:43.288433   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:43.312632   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:43.312644   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:45.832963   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:50.834129   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:50.834334   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:50.850597   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:50.850703   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:50.863338   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:50.863415   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:50.874116   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:50.874192   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:50.888527   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:50.888612   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:50.899791   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:50.899869   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:50.910440   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:50.910510   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:50.920225   11277 logs.go:282] 0 containers: []
	W1205 11:09:50.920237   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:50.920298   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:50.931078   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:50.931095   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:50.931100   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:50.942657   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:50.942669   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:50.955857   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:50.955871   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:50.970469   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:50.970481   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:50.982138   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:50.982148   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:51.006967   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:51.006980   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:51.019295   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:51.019305   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:51.037124   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:51.037137   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:51.061699   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:51.061709   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:51.066113   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:51.066122   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:51.106428   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:51.106442   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:09:51.120404   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:51.120418   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:51.137145   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:51.137155   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:51.152498   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:51.152513   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:51.164443   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:51.164459   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:51.202551   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:51.202571   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:53.719288   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:09:58.719761   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:09:58.720020   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:09:58.739565   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:09:58.739678   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:09:58.753717   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:09:58.753805   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:09:58.765964   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:09:58.766047   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:09:58.781214   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:09:58.781294   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:09:58.792019   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:09:58.792102   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:09:58.803445   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:09:58.803524   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:09:58.814142   11277 logs.go:282] 0 containers: []
	W1205 11:09:58.814154   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:09:58.814223   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:09:58.824461   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:09:58.824482   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:09:58.824487   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:09:58.864637   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:09:58.864652   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:09:58.879027   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:09:58.879037   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:09:58.891580   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:09:58.891590   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:09:58.915630   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:09:58.915638   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:09:58.920112   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:09:58.920117   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:09:58.934264   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:09:58.934275   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:09:58.945484   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:09:58.945495   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:09:58.956672   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:09:58.956682   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:09:58.968705   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:09:58.968714   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:09:58.983659   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:09:58.983670   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:09:58.998800   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:09:58.998811   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:09:59.011068   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:09:59.011078   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:09:59.028192   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:09:59.028202   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:09:59.064919   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:09:59.064934   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:09:59.089618   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:09:59.089628   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:01.603534   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:06.605847   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:06.606140   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:06.631762   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:06.631909   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:06.648593   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:06.648679   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:06.662080   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:06.662163   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:06.673730   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:06.673803   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:06.684061   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:06.684140   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:06.694542   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:06.694611   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:06.704517   11277 logs.go:282] 0 containers: []
	W1205 11:10:06.704530   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:06.704598   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:06.721159   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:06.721177   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:06.721184   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:06.757646   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:06.757657   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:06.784268   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:06.784280   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:06.800304   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:06.800319   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:06.812105   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:06.812117   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:06.829645   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:06.829655   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:06.842598   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:06.842608   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:06.847672   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:06.847678   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:06.862242   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:06.862251   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:06.876664   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:06.876678   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:06.900471   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:06.900479   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:06.912269   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:06.912280   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:06.946287   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:06.946302   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:06.960395   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:06.960405   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:06.972303   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:06.972316   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:06.983710   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:06.983721   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:09.498597   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:14.500810   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:14.500987   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:14.512730   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:14.512811   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:14.523505   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:14.523585   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:14.534111   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:14.534185   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:14.544126   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:14.544202   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:14.554503   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:14.554588   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:14.565092   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:14.565175   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:14.575183   11277 logs.go:282] 0 containers: []
	W1205 11:10:14.575194   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:14.575262   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:14.585231   11277 logs.go:282] 1 containers: [079f372fb960]
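Editor's note: each diagnostic pass begins by enumerating containers per control-plane component with docker ps name filters. Two IDs for a component (as for kube-apiserver and etcd here) indicate an exited container plus its restarted replacement, while the kindnet query legitimately returns none under the bridge CNI. A hedged sketch of the same enumeration follows, with exec.Command standing in for the remote ssh_runner the log uses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, including exited ones, whose name carries
// the k8s_<component> prefix that kubelet's docker integration assigns.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids) // mirrors logs.go:282
	}
}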
	I1205 11:10:14.585247   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:14.585253   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:14.596906   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:14.596917   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:14.610980   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:14.610991   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:14.652474   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:14.652486   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:14.684979   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:14.684991   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:14.702399   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:14.702411   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:14.716848   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:14.716859   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:14.728824   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:14.728835   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:14.743673   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:14.743685   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:14.755767   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:14.755779   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:14.760417   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:14.760424   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:14.774244   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:14.774255   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:14.785972   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:14.785985   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:14.798750   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:14.798761   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:14.823397   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:14.823405   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:14.862332   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:14.862340   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:17.378948   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:22.381335   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:22.381553   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:22.398790   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:22.398895   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:22.412679   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:22.412764   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:22.424431   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:22.424516   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:22.435465   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:22.435552   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:22.445942   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:22.446014   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:22.456568   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:22.456644   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:22.466047   11277 logs.go:282] 0 containers: []
	W1205 11:10:22.466059   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:22.466128   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:22.476283   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:22.476300   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:22.476305   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:22.514893   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:22.514904   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:22.519276   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:22.519282   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:22.531380   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:22.531394   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:22.545062   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:22.545072   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:22.557254   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:22.557263   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:22.579875   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:22.579884   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:22.615257   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:22.615271   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:22.630018   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:22.630028   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:22.650959   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:22.650976   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:22.676388   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:22.676401   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:22.692192   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:22.692207   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:22.707157   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:22.707172   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:22.721219   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:22.721235   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:22.732816   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:22.732827   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:22.744218   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:22.744229   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:25.258001   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:30.259696   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:30.259913   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:30.284096   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:30.284205   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:30.299017   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:30.299109   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:30.312564   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:30.312635   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:30.323885   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:30.323960   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:30.349573   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:30.349652   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:30.369457   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:30.369541   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:30.384064   11277 logs.go:282] 0 containers: []
	W1205 11:10:30.384080   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:30.384156   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:30.394315   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:30.394333   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:30.394339   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:30.433052   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:30.433064   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:30.447674   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:30.447682   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:30.458932   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:30.458943   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:30.478257   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:30.478269   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:30.492416   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:30.492431   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:30.521390   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:30.521404   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:30.535692   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:30.535704   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:30.547738   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:30.547747   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:30.562303   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:30.562313   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:30.566822   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:30.566829   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:30.589086   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:30.589099   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:30.602530   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:30.602540   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:30.614641   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:30.614653   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:30.652174   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:30.652180   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:30.664830   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:30.664839   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:33.190254   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:38.192751   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:38.193269   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:38.232611   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:38.232758   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:38.252813   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:38.252927   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:38.267550   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:38.267639   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:38.280006   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:38.280092   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:38.290530   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:38.290607   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:38.301660   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:38.301743   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:38.315093   11277 logs.go:282] 0 containers: []
	W1205 11:10:38.315103   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:38.315165   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:38.330422   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:38.330439   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:38.330446   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:38.349034   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:38.349043   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:38.364179   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:38.364191   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:38.376298   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:38.376310   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:38.401668   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:38.401684   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:38.447052   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:38.447068   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:38.473232   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:38.473246   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:38.484807   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:38.484820   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:38.496732   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:38.496742   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:38.520328   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:38.520338   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:38.534374   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:38.534387   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:38.548570   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:38.548580   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:38.566468   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:38.566478   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:38.571104   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:38.571114   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:38.607358   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:38.607372   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:38.619742   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:38.619753   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:41.133973   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:46.135698   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:46.135868   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:46.149371   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:46.149463   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:46.160381   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:46.160467   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:46.171214   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:46.171291   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:46.181460   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:46.181541   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:46.191861   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:46.191940   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:46.202696   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:46.202767   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:46.212879   11277 logs.go:282] 0 containers: []
	W1205 11:10:46.212897   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:46.212964   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:46.223728   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:46.223744   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:46.223749   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:46.237969   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:46.237984   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:46.249791   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:46.249802   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:46.288114   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:46.288127   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:46.322214   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:46.322228   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:46.337037   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:46.337050   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:46.348504   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:46.348518   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:46.352544   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:46.352553   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:46.364387   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:46.364400   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:46.379298   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:46.379310   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:46.390795   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:46.390806   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:46.414371   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:46.414388   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:46.427098   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:46.427108   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:46.454473   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:46.454487   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:46.470765   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:46.470776   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:46.489622   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:46.489632   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:49.005482   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:10:54.008150   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:10:54.008310   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:10:54.021067   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:10:54.021156   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:10:54.032329   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:10:54.032407   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:10:54.043496   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:10:54.043562   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:10:54.055615   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:10:54.055694   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:10:54.066326   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:10:54.066405   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:10:54.077354   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:10:54.077431   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:10:54.087747   11277 logs.go:282] 0 containers: []
	W1205 11:10:54.087762   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:10:54.087827   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:10:54.102008   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:10:54.102025   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:10:54.102031   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:10:54.124956   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:10:54.124964   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:10:54.138699   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:10:54.138709   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:10:54.150224   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:10:54.150237   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:10:54.166401   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:10:54.166411   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:10:54.178102   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:10:54.178114   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:10:54.196132   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:10:54.196146   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:10:54.208149   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:10:54.208159   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:10:54.212709   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:10:54.212716   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:10:54.248507   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:10:54.248518   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:10:54.273641   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:10:54.273652   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:10:54.288057   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:10:54.288066   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:10:54.300747   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:10:54.300759   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:10:54.315416   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:10:54.315427   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:10:54.339328   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:10:54.339337   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:10:54.355614   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:10:54.355624   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:10:56.895898   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:01.898358   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:01.898600   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:01.920823   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:11:01.920960   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:01.937137   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:11:01.937226   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:01.949894   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:11:01.949977   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:01.961058   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:11:01.961139   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:01.972602   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:11:01.972681   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:01.983474   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:11:01.983555   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:01.993880   11277 logs.go:282] 0 containers: []
	W1205 11:11:01.993893   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:01.993965   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:02.004264   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:11:02.004284   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:11:02.004291   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:11:02.018101   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:11:02.018112   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:11:02.032118   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:11:02.032129   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:11:02.043735   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:02.043745   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:02.082815   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:02.082828   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:02.087649   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:02.087657   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:02.122734   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:11:02.122745   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:11:02.136680   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:11:02.136690   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:11:02.151927   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:11:02.151938   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:11:02.168166   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:02.168175   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:02.191937   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:11:02.191949   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:02.203842   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:11:02.203852   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:11:02.229855   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:11:02.229865   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:11:02.241304   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:11:02.241315   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:11:02.253346   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:11:02.253361   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:11:02.273277   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:11:02.273286   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:11:04.788169   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:09.790436   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:09.790601   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:11:09.808014   11277 logs.go:282] 2 containers: [a418a84ef6cc bd0711c054c5]
	I1205 11:11:09.808116   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:11:09.822433   11277 logs.go:282] 2 containers: [f11347e6276b 0275d18bc05a]
	I1205 11:11:09.822521   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:11:09.834293   11277 logs.go:282] 1 containers: [3a9bcab2f998]
	I1205 11:11:09.834365   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:11:09.844769   11277 logs.go:282] 2 containers: [8db7fec5acc1 c475eaff13ec]
	I1205 11:11:09.844843   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:11:09.854812   11277 logs.go:282] 1 containers: [3d0daae0db77]
	I1205 11:11:09.854898   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:11:09.865490   11277 logs.go:282] 2 containers: [efb048aacb1b 1447f2c97140]
	I1205 11:11:09.865576   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:11:09.876321   11277 logs.go:282] 0 containers: []
	W1205 11:11:09.876355   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:11:09.876422   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:11:09.887256   11277 logs.go:282] 1 containers: [079f372fb960]
	I1205 11:11:09.887270   11277 logs.go:123] Gathering logs for etcd [f11347e6276b] ...
	I1205 11:11:09.887276   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f11347e6276b"
	I1205 11:11:09.901191   11277 logs.go:123] Gathering logs for kube-scheduler [c475eaff13ec] ...
	I1205 11:11:09.901201   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c475eaff13ec"
	I1205 11:11:09.916245   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:11:09.916256   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:11:09.929100   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:11:09.929110   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:11:09.933416   11277 logs.go:123] Gathering logs for kube-apiserver [a418a84ef6cc] ...
	I1205 11:11:09.933422   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a418a84ef6cc"
	I1205 11:11:09.947110   11277 logs.go:123] Gathering logs for coredns [3a9bcab2f998] ...
	I1205 11:11:09.947120   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9bcab2f998"
	I1205 11:11:09.966928   11277 logs.go:123] Gathering logs for kube-controller-manager [efb048aacb1b] ...
	I1205 11:11:09.966939   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efb048aacb1b"
	I1205 11:11:09.984725   11277 logs.go:123] Gathering logs for kube-apiserver [bd0711c054c5] ...
	I1205 11:11:09.984737   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd0711c054c5"
	I1205 11:11:10.011547   11277 logs.go:123] Gathering logs for etcd [0275d18bc05a] ...
	I1205 11:11:10.011562   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0275d18bc05a"
	I1205 11:11:10.025931   11277 logs.go:123] Gathering logs for kube-scheduler [8db7fec5acc1] ...
	I1205 11:11:10.025941   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8db7fec5acc1"
	I1205 11:11:10.037563   11277 logs.go:123] Gathering logs for kube-proxy [3d0daae0db77] ...
	I1205 11:11:10.037573   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0daae0db77"
	I1205 11:11:10.051819   11277 logs.go:123] Gathering logs for kube-controller-manager [1447f2c97140] ...
	I1205 11:11:10.051829   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1447f2c97140"
	I1205 11:11:10.064917   11277 logs.go:123] Gathering logs for storage-provisioner [079f372fb960] ...
	I1205 11:11:10.064927   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 079f372fb960"
	I1205 11:11:10.076561   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:11:10.076571   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:11:10.114947   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:11:10.114957   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:11:10.149899   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:11:10.149910   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:11:12.673527   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:17.675760   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:17.675849   11277 kubeadm.go:597] duration metric: took 4m3.317255291s to restartPrimaryControlPlane
	W1205 11:11:17.675892   11277 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 11:11:17.675916   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1205 11:11:18.746079   11277 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.07014925s)
	I1205 11:11:18.746163   11277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 11:11:18.751190   11277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 11:11:18.754084   11277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 11:11:18.756615   11277 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 11:11:18.756625   11277 kubeadm.go:157] found existing configuration files:
	
	I1205 11:11:18.756655   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf
	I1205 11:11:18.759369   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 11:11:18.759398   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 11:11:18.762399   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf
	I1205 11:11:18.765110   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 11:11:18.765147   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 11:11:18.768001   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf
	I1205 11:11:18.770995   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 11:11:18.771021   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 11:11:18.773681   11277 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf
	I1205 11:11:18.776315   11277 kubeadm.go:163] "https://control-plane.minikube.internal:52022" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52022 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 11:11:18.776343   11277 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
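Editor's note: the grep/rm sequence above is stale-config cleanup. Each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint; because the preceding kubeadm reset deleted them all, every grep exits with status 2 and each rm -f is a no-op. An equivalent Go sketch, with paths and the endpoint string taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:52022"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		// A missing file or a missing endpoint both mean the config is stale.
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // error ignored, mirroring `rm -f`
		}
	}
}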
	I1205 11:11:18.779541   11277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 11:11:18.797980   11277 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1205 11:11:18.798009   11277 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 11:11:18.848162   11277 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 11:11:18.848306   11277 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 11:11:18.848363   11277 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 11:11:18.896182   11277 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 11:11:18.904337   11277 out.go:235]   - Generating certificates and keys ...
	I1205 11:11:18.904371   11277 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 11:11:18.904419   11277 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 11:11:18.904466   11277 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 11:11:18.904499   11277 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 11:11:18.904537   11277 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 11:11:18.904576   11277 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 11:11:18.904605   11277 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 11:11:18.904652   11277 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 11:11:18.904784   11277 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 11:11:18.904913   11277 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 11:11:18.904970   11277 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 11:11:18.905025   11277 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 11:11:19.055209   11277 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 11:11:19.088878   11277 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 11:11:19.230081   11277 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 11:11:19.265358   11277 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 11:11:19.296923   11277 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 11:11:19.297316   11277 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 11:11:19.297420   11277 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 11:11:19.381943   11277 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 11:11:19.390090   11277 out.go:235]   - Booting up control plane ...
	I1205 11:11:19.390153   11277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 11:11:19.390197   11277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 11:11:19.390231   11277 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 11:11:19.390288   11277 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 11:11:19.390363   11277 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 11:11:24.387237   11277 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002480 seconds
	I1205 11:11:24.387318   11277 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 11:11:24.392466   11277 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 11:11:24.901199   11277 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 11:11:24.901312   11277 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-616000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 11:11:25.406334   11277 kubeadm.go:310] [bootstrap-token] Using token: r8icgo.cbvdhc0kia6v4pl5
	I1205 11:11:25.412496   11277 out.go:235]   - Configuring RBAC rules ...
	I1205 11:11:25.412566   11277 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 11:11:25.412627   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 11:11:25.419282   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 11:11:25.420312   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 11:11:25.421274   11277 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 11:11:25.422147   11277 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 11:11:25.425289   11277 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 11:11:25.596453   11277 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 11:11:25.810445   11277 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 11:11:25.811224   11277 kubeadm.go:310] 
	I1205 11:11:25.811260   11277 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 11:11:25.811281   11277 kubeadm.go:310] 
	I1205 11:11:25.811359   11277 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 11:11:25.811364   11277 kubeadm.go:310] 
	I1205 11:11:25.811377   11277 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 11:11:25.811479   11277 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 11:11:25.811513   11277 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 11:11:25.811516   11277 kubeadm.go:310] 
	I1205 11:11:25.811639   11277 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 11:11:25.811647   11277 kubeadm.go:310] 
	I1205 11:11:25.811687   11277 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 11:11:25.811694   11277 kubeadm.go:310] 
	I1205 11:11:25.811725   11277 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 11:11:25.811788   11277 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 11:11:25.811837   11277 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 11:11:25.811839   11277 kubeadm.go:310] 
	I1205 11:11:25.811897   11277 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 11:11:25.811938   11277 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 11:11:25.811948   11277 kubeadm.go:310] 
	I1205 11:11:25.811988   11277 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token r8icgo.cbvdhc0kia6v4pl5 \
	I1205 11:11:25.812046   11277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a \
	I1205 11:11:25.812060   11277 kubeadm.go:310] 	--control-plane 
	I1205 11:11:25.812063   11277 kubeadm.go:310] 
	I1205 11:11:25.812103   11277 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 11:11:25.812110   11277 kubeadm.go:310] 
	I1205 11:11:25.812147   11277 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token r8icgo.cbvdhc0kia6v4pl5 \
	I1205 11:11:25.812210   11277 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:88a7dd8c9efc476dd67085474405097045d2c1786f9e8e2a034455d9e105c30a 
	I1205 11:11:25.812271   11277 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
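Editor's note: the --discovery-token-ca-cert-hash that kubeadm prints in the join commands above is, per kubeadm's documented pinning format, a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. The sketch below recomputes it for verification; the ca.crt path is kubeadm's default location and an assumption here.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}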
	I1205 11:11:25.812283   11277 cni.go:84] Creating CNI manager for ""
	I1205 11:11:25.812291   11277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:11:25.816949   11277 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 11:11:25.822927   11277 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 11:11:25.826576   11277 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
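Editor's note: the 496-byte 1-k8s.conflist pushed above is minikube's bridge CNI configuration. Its exact contents are not shown in the log, so the reconstruction below is illustrative only; the shape and field names follow the CNI spec's bridge and portmap plugins.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative conflist; the real file minikube writes may differ in details.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"addIf":            "true",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumption: the default pod CIDR
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}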
	I1205 11:11:25.831616   11277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 11:11:25.831680   11277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 11:11:25.831681   11277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-616000 minikube.k8s.io/updated_at=2024_12_05T11_11_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e85f1467f7b5bf0a3dd477c54f3fe5919d424331 minikube.k8s.io/name=stopped-upgrade-616000 minikube.k8s.io/primary=true
	I1205 11:11:25.874293   11277 kubeadm.go:1113] duration metric: took 42.662542ms to wait for elevateKubeSystemPrivileges
	I1205 11:11:25.874299   11277 ops.go:34] apiserver oom_adj: -16
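Editor's note: the oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj) confirms the apiserver received a protective OOM score, reported here as -16. A small sketch of the same check, taking the PID as an argument rather than shelling out to pgrep:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: oomcheck <kube-apiserver pid>")
		os.Exit(1)
	}
	data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 in this run
}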
	I1205 11:11:25.874312   11277 kubeadm.go:394] duration metric: took 4m11.529325333s to StartCluster
	I1205 11:11:25.874323   11277 settings.go:142] acquiring lock: {Name:mk685c3b4b58f394644fceb0edca00785ff86d9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:11:25.874422   11277 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:11:25.874874   11277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/kubeconfig: {Name:mkb6577356fc2312bf9b329fd967969d2d30b8a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:11:25.875115   11277 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:11:25.875122   11277 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 11:11:25.875157   11277 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-616000"
	I1205 11:11:25.875169   11277 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-616000"
	W1205 11:11:25.875173   11277 addons.go:243] addon storage-provisioner should already be in state true
	I1205 11:11:25.875184   11277 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1205 11:11:25.875203   11277 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-616000"
	I1205 11:11:25.875216   11277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-616000"
	I1205 11:11:25.875222   11277 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
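The toEnable map above shows only storage-provisioner and default-storageclass switched on; everything else is off. The same state can be inspected or changed from the host with the minikube CLI (profile name taken from the log):

    minikube -p stopped-upgrade-616000 addons list
    minikube -p stopped-upgrade-616000 addons enable storage-provisioner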
	I1205 11:11:25.878923   11277 out.go:177] * Verifying Kubernetes components...
	I1205 11:11:25.879614   11277 kapi.go:59] client config for stopped-upgrade-616000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/stopped-upgrade-616000/client.key", CAFile:"/Users/jenkins/minikube-integration/20052-8600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046c7740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 11:11:25.883181   11277 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-616000"
	W1205 11:11:25.883187   11277 addons.go:243] addon default-storageclass should already be in state true
	I1205 11:11:25.883195   11277 host.go:66] Checking if "stopped-upgrade-616000" exists ...
	I1205 11:11:25.883753   11277 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 11:11:25.883758   11277 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 11:11:25.883763   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:11:25.886794   11277 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 11:11:25.890885   11277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 11:11:25.892121   11277 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:11:25.892125   11277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 11:11:25.892130   11277 sshutil.go:53] new ssh client: &{IP:localhost Port:51987 SSHKeyPath:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/stopped-upgrade-616000/id_rsa Username:docker}
	I1205 11:11:25.964077   11277 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 11:11:25.969971   11277 api_server.go:52] waiting for apiserver process to appear ...
	I1205 11:11:25.970028   11277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 11:11:25.973766   11277 api_server.go:72] duration metric: took 98.640625ms to wait for apiserver process to appear ...
	I1205 11:11:25.973775   11277 api_server.go:88] waiting for apiserver healthz status ...
	I1205 11:11:25.973783   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:26.012619   11277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 11:11:26.021323   11277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 11:11:26.379698   11277 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 11:11:26.379711   11277 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 11:11:30.975876   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:30.975905   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:35.976146   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:35.976189   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:40.976554   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:40.976579   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:45.976971   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:45.976996   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:50.977504   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:50.977524   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:11:55.978676   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:11:55.978704   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
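Every probe above times out after about five seconds, so nothing is answering TCP on 10.0.2.15:8443. The same endpoint can be checked by hand from inside the guest (-k because the apiserver serves a self-signed cert); this is a diagnostic sketch, not part of the test:

    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # a healthy apiserver prints "ok"; here the dial itself would time out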
	W1205 11:11:56.382189   11277 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1205 11:11:56.388444   11277 out.go:177] * Enabled addons: storage-provisioner
	I1205 11:11:56.396443   11277 addons.go:510] duration metric: took 30.521238208s for enable addons: enabled=[storage-provisioner]
	I1205 11:12:00.979720   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:00.979741   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:05.981059   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:05.981079   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:10.982256   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:10.982303   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:15.984197   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:15.984229   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:20.986481   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:20.986533   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:25.988736   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:25.988843   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:12:26.011881   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:12:26.011958   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:12:26.023175   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:12:26.023258   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:12:26.033924   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:12:26.033996   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:12:26.044915   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:12:26.044996   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:12:26.060553   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:12:26.060637   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:12:26.071653   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:12:26.071729   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:12:26.086566   11277 logs.go:282] 0 containers: []
	W1205 11:12:26.086576   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:12:26.086642   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:12:26.096853   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:12:26.096869   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:12:26.096879   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:12:26.132646   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:12:26.132657   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:12:26.148897   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:12:26.148907   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:12:26.160664   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:12:26.160675   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:12:26.172812   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:12:26.172823   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:12:26.197716   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:12:26.197727   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:12:26.232345   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:12:26.232357   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:12:26.246558   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:12:26.246567   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:12:26.263665   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:12:26.263675   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:12:26.278438   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:12:26.278449   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:12:26.295723   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:12:26.295733   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:12:26.307516   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:12:26.307530   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:12:26.321260   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:12:26.321271   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
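The block above is minikube's diagnostic sweep: find the container for each control-plane component, tail its logs, then collect the kubelet and Docker journals, dmesg, and container status. Condensed into one loop, a rough shell equivalent of the per-component commands (a sketch, not minikube's own code; coredns actually has two containers here and this loop tails only the newest):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      id=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | head -n1)
      [ -n "$id" ] && docker logs --tail 400 "$id"
    done
    sudo journalctl -u kubelet -u docker -u cri-docker -n 400

The same sweep then repeats after each failed healthz probe for the remainder of the 6m0s node wait announced at 11:11:25.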
	I1205 11:12:28.827791   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:33.830498   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:33.830961   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:12:33.863509   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:12:33.863645   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:12:33.882677   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:12:33.882790   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:12:33.896710   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:12:33.896801   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:12:33.911834   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:12:33.911909   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:12:33.923060   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:12:33.923137   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:12:33.933935   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:12:33.934014   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:12:33.944396   11277 logs.go:282] 0 containers: []
	W1205 11:12:33.944407   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:12:33.944473   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:12:33.955132   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:12:33.955148   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:12:33.955154   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:12:33.990901   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:12:33.990913   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:12:34.026990   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:12:34.027002   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:12:34.038863   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:12:34.038875   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:12:34.051084   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:12:34.051094   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:12:34.066619   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:12:34.066630   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:12:34.084877   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:12:34.084888   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:12:34.108360   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:12:34.108367   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:12:34.112930   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:12:34.112938   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:12:34.128505   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:12:34.128515   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:12:34.148594   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:12:34.148607   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:12:34.160814   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:12:34.160824   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:12:34.173110   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:12:34.173122   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:12:36.686567   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:41.689407   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:41.689833   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:12:41.728031   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:12:41.728159   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:12:41.743476   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:12:41.743559   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:12:41.756635   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:12:41.756703   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:12:41.767597   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:12:41.767676   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:12:41.778007   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:12:41.778083   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:12:41.788785   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:12:41.788888   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:12:41.799214   11277 logs.go:282] 0 containers: []
	W1205 11:12:41.799224   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:12:41.799288   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:12:41.809336   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:12:41.809351   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:12:41.809357   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:12:41.827586   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:12:41.827597   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:12:41.853582   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:12:41.853591   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:12:41.887705   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:12:41.887715   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:12:41.927021   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:12:41.927033   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:12:41.941661   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:12:41.941673   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:12:41.953458   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:12:41.953472   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:12:41.964964   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:12:41.964975   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:12:41.976442   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:12:41.976453   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:12:41.981125   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:12:41.981133   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:12:41.994963   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:12:41.994977   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:12:42.006362   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:12:42.006374   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:12:42.021549   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:12:42.021558   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:12:44.535046   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:49.538084   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:49.538571   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:12:49.580790   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:12:49.580929   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:12:49.602153   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:12:49.602279   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:12:49.618911   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:12:49.618990   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:12:49.630642   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:12:49.630710   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:12:49.641330   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:12:49.641405   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:12:49.652554   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:12:49.652621   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:12:49.664158   11277 logs.go:282] 0 containers: []
	W1205 11:12:49.664172   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:12:49.664238   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:12:49.677994   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:12:49.678010   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:12:49.678015   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:12:49.689509   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:12:49.689522   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:12:49.724936   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:12:49.724943   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:12:49.739291   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:12:49.739302   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:12:49.750961   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:12:49.750974   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:12:49.762231   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:12:49.762244   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:12:49.777389   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:12:49.777399   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:12:49.789422   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:12:49.789434   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:12:49.800963   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:12:49.800973   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:12:49.806879   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:12:49.806891   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:12:49.841954   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:12:49.841965   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:12:49.856220   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:12:49.856232   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:12:49.881133   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:12:49.881145   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:12:52.406775   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:12:57.408314   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:12:57.408512   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:12:57.422447   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:12:57.422533   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:12:57.433050   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:12:57.433122   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:12:57.443559   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:12:57.443635   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:12:57.454165   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:12:57.454236   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:12:57.464527   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:12:57.464598   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:12:57.475292   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:12:57.475358   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:12:57.485095   11277 logs.go:282] 0 containers: []
	W1205 11:12:57.485110   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:12:57.485172   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:12:57.495634   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:12:57.495653   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:12:57.495659   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:12:57.511121   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:12:57.511135   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:12:57.535997   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:12:57.536008   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:12:57.570395   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:12:57.570404   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:12:57.589614   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:12:57.589623   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:12:57.604725   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:12:57.604737   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:12:57.619949   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:12:57.619959   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:12:57.635492   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:12:57.635505   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:12:57.646587   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:12:57.646597   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:12:57.651389   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:12:57.651396   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:12:57.687322   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:12:57.687336   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:12:57.698632   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:12:57.698645   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:12:57.711033   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:12:57.711043   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:00.230526   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:05.233405   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:05.233997   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:05.273147   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:05.273323   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:05.295700   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:05.295821   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:05.310359   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:13:05.310455   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:05.322783   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:05.322863   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:05.333364   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:05.333442   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:05.343800   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:05.343880   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:05.353951   11277 logs.go:282] 0 containers: []
	W1205 11:13:05.353963   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:05.354036   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:05.364186   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:05.364201   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:05.364208   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:05.388610   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:05.388619   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:05.400334   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:05.400347   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:05.411629   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:05.411642   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:05.422923   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:05.422937   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:05.440756   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:05.440766   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:05.454868   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:05.454880   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:05.468407   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:05.468416   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:05.483359   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:05.483372   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:05.494976   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:05.494989   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:05.505982   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:05.505994   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:05.540350   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:05.540360   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:05.545144   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:05.545150   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:08.081133   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:13.083709   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:13.084239   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:13.125324   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:13.125474   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:13.153880   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:13.153990   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:13.168017   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:13:13.168106   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:13.179719   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:13.179793   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:13.190259   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:13.190342   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:13.200707   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:13.200785   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:13.210584   11277 logs.go:282] 0 containers: []
	W1205 11:13:13.210595   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:13.210655   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:13.220900   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:13.220916   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:13.220920   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:13.235011   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:13.235025   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:13.247167   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:13.247180   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:13.270507   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:13.270514   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:13.283199   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:13.283210   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:13.321200   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:13.321212   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:13.326282   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:13.326290   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:13.340603   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:13.340616   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:13.352570   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:13.352582   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:13.363770   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:13.363783   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:13.391857   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:13.391869   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:13.415086   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:13.415098   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:13.426186   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:13.426198   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:15.961207   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:20.963666   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:20.963945   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:20.986466   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:20.986599   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:21.001114   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:21.001207   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:21.013591   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:13:21.013672   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:21.028853   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:21.028928   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:21.041047   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:21.041126   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:21.051495   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:21.051573   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:21.061233   11277 logs.go:282] 0 containers: []
	W1205 11:13:21.061247   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:21.061322   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:21.072100   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:21.072114   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:21.072123   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:21.085438   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:21.085453   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:21.089988   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:21.089996   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:21.104516   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:21.104527   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:21.118078   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:21.118091   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:21.135987   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:21.136001   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:21.148505   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:21.148516   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:21.174316   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:21.174323   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:21.207889   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:21.207897   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:21.243986   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:21.244000   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:21.255647   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:21.255662   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:21.267180   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:21.267193   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:21.282202   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:21.282211   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:23.795373   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:28.797775   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:28.798080   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:28.824233   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:28.824350   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:28.838915   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:28.838989   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:28.850410   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:13:28.850488   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:28.864405   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:28.864481   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:28.875556   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:28.875632   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:28.886379   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:28.886463   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:28.896119   11277 logs.go:282] 0 containers: []
	W1205 11:13:28.896130   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:28.896191   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:28.906309   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:28.906324   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:28.906330   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:28.910568   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:28.910574   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:28.924098   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:28.924107   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:28.935747   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:28.935760   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:28.950998   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:28.951010   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:28.971741   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:28.971754   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:29.005895   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:29.005905   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:29.040737   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:29.040751   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:29.055026   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:29.055039   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:29.066720   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:29.066731   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:29.084111   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:29.084122   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:29.100122   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:29.100134   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:29.123313   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:29.123321   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:31.636098   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:36.638843   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:36.639415   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:36.677732   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:36.677885   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:36.699626   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:36.699763   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:36.715328   11277 logs.go:282] 2 containers: [e639ce1a7340 1ad49ed9e479]
	I1205 11:13:36.715419   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:36.732599   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:36.732684   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:36.747924   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:36.748000   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:36.758520   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:36.758596   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:36.768946   11277 logs.go:282] 0 containers: []
	W1205 11:13:36.768959   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:36.769028   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:36.779830   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:36.779849   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:36.779856   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:36.803678   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:36.803689   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:36.807851   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:36.807859   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:36.845913   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:36.845924   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:36.860151   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:36.860165   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:36.871696   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:36.871709   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:36.889721   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:36.889731   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:36.909765   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:36.909782   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:36.949173   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:36.949188   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:36.985493   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:36.985513   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:37.022718   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:37.022737   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:37.044901   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:37.044916   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:37.062591   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:37.062604   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:39.576078   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:44.578943   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:44.579431   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:44.615818   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:44.615978   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:44.636107   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:44.636223   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:44.651120   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:13:44.651206   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:44.670094   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:44.670171   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:44.680912   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:44.680989   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:44.691372   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:44.691443   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:44.701231   11277 logs.go:282] 0 containers: []
	W1205 11:13:44.701242   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:44.701311   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:44.712526   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:44.712543   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:44.712550   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:44.717364   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:44.717372   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:44.729996   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:44.730009   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:44.742137   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:44.742147   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:44.760553   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:44.760566   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:44.793601   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:13:44.793608   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:13:44.809504   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:44.809516   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:44.821875   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:44.821886   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:44.833693   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:44.833707   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:44.845394   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:44.845409   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:44.871066   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:44.871073   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:44.906943   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:44.906955   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:44.921765   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:44.921775   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:44.935941   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:13:44.935953   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:13:44.947668   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:44.947683   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:47.464778   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:13:52.467110   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:13:52.467562   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:13:52.499365   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:13:52.499504   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:13:52.519644   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:13:52.519742   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:13:52.533311   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:13:52.533399   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:13:52.545022   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:13:52.545100   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:13:52.556075   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:13:52.556149   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:13:52.566759   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:13:52.566835   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:13:52.576727   11277 logs.go:282] 0 containers: []
	W1205 11:13:52.576738   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:13:52.576811   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:13:52.587062   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:13:52.587078   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:13:52.587083   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:13:52.600966   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:13:52.600979   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:13:52.612247   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:13:52.612261   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:13:52.623938   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:13:52.623951   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:13:52.646997   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:13:52.647007   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:13:52.670581   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:13:52.670588   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:13:52.704610   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:13:52.704623   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:13:52.719043   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:13:52.719054   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:13:52.730683   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:13:52.730693   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:13:52.734831   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:13:52.734839   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:13:52.751798   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:13:52.751808   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:13:52.763140   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:13:52.763152   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:13:52.798066   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:13:52.798077   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:13:52.813615   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:13:52.813628   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:13:52.829882   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:13:52.829895   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:13:55.343591   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:00.346596   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:00.347073   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:00.380473   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:00.380640   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:00.400788   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:00.400903   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:00.415512   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:00.415617   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:00.427561   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:00.427632   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:00.440392   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:00.440459   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:00.451035   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:00.451099   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:00.461672   11277 logs.go:282] 0 containers: []
	W1205 11:14:00.461683   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:00.461735   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:00.472111   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
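Before each gathering pass, logs.go:282 resolves container IDs per control-plane component by filtering on the kubelet's k8s_<component> container-name prefix, which is why kindnet (not deployed here) yields "0 containers" and a warning while coredns yields four. A sketch of that lookup using os/exec, assuming docker is on PATH; the helper names are hypothetical but the docker invocation is copied from the log:

```go
// Sketch of the per-component container lookup behind logs.go:282.
// The docker command is verbatim from the log; helper names are hypothetical.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// one short ID per line, e.g. [5ff2f80481fc]
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:282 lines
	}
}
```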
	I1205 11:14:00.472136   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:00.472142   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:00.507450   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:00.507464   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:00.525665   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:00.525675   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:00.537795   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:00.537803   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:00.549961   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:00.549971   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:00.561709   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:00.561726   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:00.596761   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:00.596769   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:00.601559   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:00.601566   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:00.615382   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:00.615395   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:00.626917   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:00.626928   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:00.650446   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:00.650455   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:00.662228   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:00.662241   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:00.679560   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:00.679569   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:00.693996   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:00.694007   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:00.705872   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:00.705884   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:03.230194   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:08.232630   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:08.232931   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:08.255333   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:08.255462   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:08.271625   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:08.271716   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:08.284582   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:08.284665   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:08.295858   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:08.295938   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:08.306700   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:08.306772   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:08.317218   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:08.317295   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:08.328649   11277 logs.go:282] 0 containers: []
	W1205 11:14:08.328661   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:08.328726   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:08.339125   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:08.339146   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:08.339152   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:08.353151   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:08.353163   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:08.368513   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:08.368527   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:08.382184   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:08.382193   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:08.396108   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:08.396118   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:08.407427   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:08.407436   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:08.411769   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:08.411777   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:08.425668   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:08.425678   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:08.437130   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:08.437144   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:08.448738   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:08.448750   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:08.460085   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:08.460095   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:08.478979   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:08.478989   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:08.503883   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:08.503894   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:08.537522   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:08.537532   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:08.573288   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:08.573298   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
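Each gathering pass (logs.go:123) then fans out over a fixed set of sources: journalctl for kubelet and Docker/cri-docker, a filtered dmesg, kubectl describe nodes, and a `docker logs --tail 400` per resolved container, each wrapped in /bin/bash -c and executed over SSH by ssh_runner.go. A condensed sketch of that fan-out; the command strings are copied verbatim from the log, while the surrounding plumbing is hypothetical (note that Go map iteration order varies, much as the gathering order varies between passes in the log):

```go
// Condensed sketch of the log fan-out behind logs.go:123. Command strings
// are verbatim from the log; the runner plumbing is a hypothetical stand-in
// for ssh_runner.go, which runs these over SSH inside the guest.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, _ := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Println(string(out))
}

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
	// per-container logs; IDs come from the lookup sketched above
	for _, id := range []string{"5ff2f80481fc", "296764cd3f5b"} {
		gather(id, "docker logs --tail 400 "+id)
	}
}
```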
	I1205 11:14:11.086917   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:16.088313   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:16.088555   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:16.100179   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:16.100258   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:16.110665   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:16.110738   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:16.121228   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:16.121305   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:16.133487   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:16.133564   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:16.147902   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:16.147981   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:16.163269   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:16.163348   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:16.176734   11277 logs.go:282] 0 containers: []
	W1205 11:14:16.176745   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:16.176814   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:16.187201   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:16.187219   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:16.187225   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:16.221663   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:16.221674   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:16.229981   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:16.229992   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:16.244340   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:16.244351   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:16.261705   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:16.261718   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:16.286730   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:16.286737   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:16.298503   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:16.298516   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:16.310366   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:16.310378   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:16.345994   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:16.346006   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:16.360363   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:16.360376   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:16.371986   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:16.371995   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:16.386600   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:16.386612   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:16.398333   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:16.398346   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:16.410062   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:16.410074   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:16.421633   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:16.421646   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:18.941229   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:23.943557   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:23.943642   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:23.967682   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:23.967758   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:23.983402   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:23.983471   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:23.997917   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:23.997973   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:24.008377   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:24.008499   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:24.019992   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:24.020053   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:24.032785   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:24.032867   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:24.045095   11277 logs.go:282] 0 containers: []
	W1205 11:14:24.045105   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:24.045153   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:24.055753   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:24.055770   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:24.055776   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:24.068676   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:24.068690   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:24.088749   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:24.088765   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:24.101352   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:24.101362   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:24.115700   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:24.115713   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:24.132617   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:24.132636   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:24.138117   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:24.138128   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:24.155855   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:24.155866   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:24.172056   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:24.172067   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:24.190328   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:24.190342   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:24.223682   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:24.223692   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:24.236123   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:24.236136   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:24.261754   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:24.261769   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:24.276021   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:24.276031   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:24.288401   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:24.288413   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:26.829816   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:31.832601   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:31.832727   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:31.849333   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:31.849417   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:31.862694   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:31.862775   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:31.875539   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:31.875633   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:31.887856   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:31.887941   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:31.899044   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:31.899117   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:31.909513   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:31.909579   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:31.919904   11277 logs.go:282] 0 containers: []
	W1205 11:14:31.919913   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:31.919974   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:31.930416   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:31.930434   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:31.930439   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:31.943216   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:31.943230   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:31.964623   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:31.964634   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:31.989651   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:31.989657   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:32.000749   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:32.000760   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:32.012145   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:32.012154   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:32.030393   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:32.030404   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:32.041880   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:32.041892   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:32.061068   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:32.061080   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:32.094030   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:32.094042   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:32.129544   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:32.129555   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:32.143701   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:32.143713   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:32.155317   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:32.155328   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:32.159643   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:32.159651   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:32.173873   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:32.173882   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:34.690771   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:39.692889   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:39.693074   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:39.709368   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:39.709469   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:39.731893   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:39.731974   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:39.743173   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:39.743257   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:39.758730   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:39.758806   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:39.769783   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:39.769851   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:39.780841   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:39.780925   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:39.791169   11277 logs.go:282] 0 containers: []
	W1205 11:14:39.791178   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:39.791271   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:39.801713   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:39.801732   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:39.801737   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:39.836992   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:39.837004   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:39.850056   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:39.850071   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:39.865754   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:39.865767   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:39.880854   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:39.880864   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:39.905414   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:39.905421   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:39.917281   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:39.917291   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:39.952012   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:39.952023   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:39.969589   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:39.969598   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:39.981423   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:39.981434   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:39.985529   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:39.985537   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:39.999569   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:39.999582   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:40.013282   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:40.013292   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:40.025566   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:40.025580   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:40.037297   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:40.037312   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:42.551214   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:47.552072   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:47.552161   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:47.564259   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:47.564335   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:47.577903   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:47.577958   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:47.589225   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:47.589295   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:47.612431   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:47.612507   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:47.624528   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:47.624585   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:47.635736   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:47.635796   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:47.647663   11277 logs.go:282] 0 containers: []
	W1205 11:14:47.647673   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:47.647743   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:47.660132   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:47.660151   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:47.660157   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:47.674691   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:47.674701   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:47.686968   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:47.686980   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:47.704415   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:47.704429   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:14:47.709599   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:47.709611   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:47.722320   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:47.722333   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:47.741762   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:47.741778   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:47.768352   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:47.768372   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:47.784101   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:47.784118   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:47.806268   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:47.806283   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:47.844893   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:47.844904   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:47.859325   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:47.859338   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:47.872256   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:47.872265   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:47.890289   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:47.890302   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:47.904356   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:47.904367   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:50.442828   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:14:55.445318   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:14:55.445770   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:14:55.480432   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:14:55.480567   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:14:55.500591   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:14:55.500697   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:14:55.515641   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:14:55.515730   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:14:55.528930   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:14:55.529003   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:14:55.540353   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:14:55.540433   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:14:55.551245   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:14:55.551313   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:14:55.562207   11277 logs.go:282] 0 containers: []
	W1205 11:14:55.562219   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:14:55.562277   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:14:55.573750   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:14:55.573771   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:14:55.573777   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:14:55.588993   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:14:55.589007   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:14:55.604116   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:14:55.604129   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:14:55.617012   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:14:55.617024   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:14:55.632677   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:14:55.632689   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:14:55.657240   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:14:55.657247   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:14:55.669969   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:14:55.669984   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:14:55.704767   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:14:55.704783   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:14:55.717183   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:14:55.717197   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:14:55.733618   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:14:55.733631   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:14:55.746304   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:14:55.746316   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:14:55.783407   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:14:55.783418   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:14:55.799469   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:14:55.799477   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:14:55.817906   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:14:55.817915   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:14:55.830396   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:14:55.830410   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
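The recurring "container status" command relies on a small shell fallback: the backticks expand to crictl's full path when `which crictl` succeeds, or to the bare word "crictl" otherwise; if that invocation then fails, the `||` branch switches to plain `sudo docker ps -a`. A sketch of how a runner would wrap it, with the command string verbatim from the log:

```go
// Sketch of the "container status" probe. The command string is verbatim
// from the log: `which crictl || echo crictl` expands to crictl's path if
// installed, otherwise to the bare word "crictl", whose failure under sudo
// triggers the `|| sudo docker ps -a` fallback.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const cmd = "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Println(string(out))
}
```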
	I1205 11:14:58.337021   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:15:03.339891   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:15:03.340419   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:15:03.380175   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:15:03.380324   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:15:03.403074   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:15:03.403192   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:15:03.419516   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:15:03.419607   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:15:03.432240   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:15:03.432317   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:15:03.443400   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:15:03.443481   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:15:03.454427   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:15:03.454503   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:15:03.465610   11277 logs.go:282] 0 containers: []
	W1205 11:15:03.465620   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:15:03.465683   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:15:03.477041   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:15:03.477059   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:15:03.477066   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:15:03.481956   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:15:03.481965   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:15:03.496279   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:15:03.496292   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:15:03.509825   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:15:03.509835   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:15:03.521971   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:15:03.521984   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:15:03.539098   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:15:03.539108   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:15:03.551413   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:15:03.551423   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:15:03.563330   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:15:03.563345   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:15:03.597277   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:15:03.597284   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:15:03.609432   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:15:03.609444   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:15:03.634025   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:15:03.634033   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:15:03.682376   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:15:03.682386   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:15:03.707467   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:15:03.707480   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:15:03.719853   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:15:03.719865   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:15:03.738377   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:15:03.738386   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:15:06.252704   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:15:11.255362   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:15:11.255438   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:15:11.266667   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:15:11.266730   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:15:11.278429   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:15:11.278518   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:15:11.290473   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:15:11.290554   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:15:11.306153   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:15:11.306226   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:15:11.318511   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:15:11.318586   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:15:11.330692   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:15:11.330753   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:15:11.341392   11277 logs.go:282] 0 containers: []
	W1205 11:15:11.341401   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:15:11.341471   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:15:11.352391   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:15:11.352410   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:15:11.352416   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:15:11.365371   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:15:11.365384   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:15:11.401621   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:15:11.401643   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:15:11.407100   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:15:11.407111   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:15:11.422218   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:15:11.422232   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:15:11.438192   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:15:11.438205   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:15:11.453942   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:15:11.453954   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:15:11.467506   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:15:11.467522   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:15:11.493026   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:15:11.493045   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:15:11.532058   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:15:11.532073   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:15:11.544714   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:15:11.544727   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:15:11.557525   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:15:11.557537   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:15:11.574279   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:15:11.574292   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:15:11.587349   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:15:11.587362   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:15:11.607418   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:15:11.607430   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:15:14.122956   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:15:19.125408   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:15:19.125960   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1205 11:15:19.171272   11277 logs.go:282] 1 containers: [5ff2f80481fc]
	I1205 11:15:19.171424   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1205 11:15:19.190474   11277 logs.go:282] 1 containers: [296764cd3f5b]
	I1205 11:15:19.190583   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1205 11:15:19.204853   11277 logs.go:282] 4 containers: [2485dd829d80 e5907619273d e639ce1a7340 1ad49ed9e479]
	I1205 11:15:19.204954   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1205 11:15:19.217266   11277 logs.go:282] 1 containers: [b13c6b90b64e]
	I1205 11:15:19.217342   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1205 11:15:19.228568   11277 logs.go:282] 1 containers: [50e5bccb9a97]
	I1205 11:15:19.228632   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1205 11:15:19.238989   11277 logs.go:282] 1 containers: [c77a45d882b8]
	I1205 11:15:19.239060   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1205 11:15:19.249149   11277 logs.go:282] 0 containers: []
	W1205 11:15:19.249160   11277 logs.go:284] No container was found matching "kindnet"
	I1205 11:15:19.249215   11277 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1205 11:15:19.259664   11277 logs.go:282] 1 containers: [b6fb69ae99c1]
	I1205 11:15:19.259680   11277 logs.go:123] Gathering logs for storage-provisioner [b6fb69ae99c1] ...
	I1205 11:15:19.259686   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6fb69ae99c1"
	I1205 11:15:19.270917   11277 logs.go:123] Gathering logs for coredns [2485dd829d80] ...
	I1205 11:15:19.270929   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2485dd829d80"
	I1205 11:15:19.282649   11277 logs.go:123] Gathering logs for coredns [e5907619273d] ...
	I1205 11:15:19.282663   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5907619273d"
	I1205 11:15:19.294251   11277 logs.go:123] Gathering logs for coredns [e639ce1a7340] ...
	I1205 11:15:19.294260   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e639ce1a7340"
	I1205 11:15:19.306619   11277 logs.go:123] Gathering logs for coredns [1ad49ed9e479] ...
	I1205 11:15:19.306633   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1ad49ed9e479"
	I1205 11:15:19.319218   11277 logs.go:123] Gathering logs for kubelet ...
	I1205 11:15:19.319232   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 11:15:19.351448   11277 logs.go:123] Gathering logs for kube-apiserver [5ff2f80481fc] ...
	I1205 11:15:19.351455   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ff2f80481fc"
	I1205 11:15:19.373688   11277 logs.go:123] Gathering logs for kube-scheduler [b13c6b90b64e] ...
	I1205 11:15:19.373698   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b13c6b90b64e"
	I1205 11:15:19.388402   11277 logs.go:123] Gathering logs for Docker ...
	I1205 11:15:19.388414   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1205 11:15:19.411316   11277 logs.go:123] Gathering logs for dmesg ...
	I1205 11:15:19.411325   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 11:15:19.415373   11277 logs.go:123] Gathering logs for etcd [296764cd3f5b] ...
	I1205 11:15:19.415380   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 296764cd3f5b"
	I1205 11:15:19.429446   11277 logs.go:123] Gathering logs for kube-proxy [50e5bccb9a97] ...
	I1205 11:15:19.429458   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50e5bccb9a97"
	I1205 11:15:19.441595   11277 logs.go:123] Gathering logs for container status ...
	I1205 11:15:19.441609   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 11:15:19.453856   11277 logs.go:123] Gathering logs for describe nodes ...
	I1205 11:15:19.453867   11277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 11:15:19.490192   11277 logs.go:123] Gathering logs for kube-controller-manager [c77a45d882b8] ...
	I1205 11:15:19.490206   11277 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c77a45d882b8"
	I1205 11:15:22.010406   11277 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1205 11:15:27.013157   11277 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 11:15:27.018512   11277 out.go:201] 
	W1205 11:15:27.021552   11277 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1205 11:15:27.021558   11277 out.go:270] * 
	W1205 11:15:27.022053   11277 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:15:27.033488   11277 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-616000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.78s)
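
Note on the failure mode above: the v1.24.1 control plane comes back up, but every probe of https://10.0.2.15:8443/healthz times out, so minikube keeps re-running its diagnostic sweep (docker ps -a --filter=name=k8s_<component> to find each container, then docker logs --tail 400 on it) until the 6m node wait expires. A minimal manual version of the same probe, sketched on the assumption that the stopped-upgrade-616000 profile is still present and its VM reachable (hypothetical invocation, not part of the test harness):

	minikube ssh -p stopped-upgrade-616000 -- curl -sk https://localhost:8443/healthz
	# replay one step of minikube's own sweep from the log above
	minikube ssh -p stopped-upgrade-616000 -- 'docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}'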

TestPause/serial/Start (9.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-637000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-637000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.905847958s)

-- stdout --
	* [pause-637000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-637000" primary control-plane node in "pause-637000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-637000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-637000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-637000 -n pause-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-637000 -n pause-637000: exit status 7 (54.581542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-637000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)
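
This failure, and the NoKubernetes and NetworkPlugins failures that follow, share one proximate cause: the qemu2 driver cannot connect to the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A quick host-side check, sketched under the assumption that socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs suggest (paths and service management may differ on this CI host):

	ls -l /var/run/socket_vmnet   # does the daemon's listening socket exist?
	pgrep -fl socket_vmnet        # is the daemon process running at all?
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet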

TestNoKubernetes/serial/StartWithK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-589000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-589000 --driver=qemu2 : exit status 80 (9.824963625s)

-- stdout --
	* [NoKubernetes-589000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-589000" primary control-plane node in "NoKubernetes-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-589000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000: exit status 7 (58.364417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --driver=qemu2 : exit status 80 (5.254711834s)

-- stdout --
	* [NoKubernetes-589000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-589000
	* Restarting existing qemu2 VM for "NoKubernetes-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-589000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000: exit status 7 (65.897416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)
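
Unlike the fresh-create failures above, this run takes the "Restarting existing qemu2 VM" path, so the error is reported as "driver start" rather than "creating host: create: creating": the profile and disk image survive from the previous subtest and only the vmnet connection fails. A sketch for inspecting the leftover profile state, using the MINIKUBE_HOME shown in the log (hypothetical commands, adjust paths as needed):

	out/minikube-darwin-arm64 profile list
	MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	cat "$MINIKUBE_HOME/machines/NoKubernetes-589000/qemu.pid" 2>/dev/null  # stale VM pid, if any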

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --driver=qemu2 : exit status 80 (5.257817542s)

-- stdout --
	* [NoKubernetes-589000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-589000
	* Restarting existing qemu2 VM for "NoKubernetes-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-589000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000: exit status 7 (69.99575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (5.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-589000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-589000 --driver=qemu2 : exit status 80 (5.292058375s)

-- stdout --
	* [NoKubernetes-589000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-589000
	* Restarting existing qemu2 VM for "NoKubernetes-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-589000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-589000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-589000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-589000 -n NoKubernetes-589000: exit status 7 (75.631708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-589000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.37s)

TestNetworkPlugins/group/auto/Start (9.84s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.835539708s)

-- stdout --
	* [auto-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-972000" primary control-plane node in "auto-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:13:46.942413   11494 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:13:46.942563   11494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:13:46.942566   11494 out.go:358] Setting ErrFile to fd 2...
	I1205 11:13:46.942569   11494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:13:46.942700   11494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:13:46.943970   11494 out.go:352] Setting JSON to false
	I1205 11:13:46.962254   11494 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6198,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:13:46.962331   11494 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:13:46.967787   11494 out.go:177] * [auto-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:13:46.975784   11494 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:13:46.975873   11494 notify.go:220] Checking for updates...
	I1205 11:13:46.982703   11494 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:13:46.986635   11494 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:13:46.991176   11494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:13:46.994842   11494 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:13:46.996286   11494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:13:46.999060   11494 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:13:46.999133   11494 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:13:46.999175   11494 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:13:47.003714   11494 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:13:47.009718   11494 start.go:297] selected driver: qemu2
	I1205 11:13:47.009723   11494 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:13:47.009728   11494 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:13:47.012104   11494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:13:47.014713   11494 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:13:47.018641   11494 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:13:47.018658   11494 cni.go:84] Creating CNI manager for ""
	I1205 11:13:47.018678   11494 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:13:47.018682   11494 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:13:47.018709   11494 start.go:340] cluster config:
	{Name:auto-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:13:47.023147   11494 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:13:47.031807   11494 out.go:177] * Starting "auto-972000" primary control-plane node in "auto-972000" cluster
	I1205 11:13:47.035656   11494 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:13:47.035671   11494 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:13:47.035680   11494 cache.go:56] Caching tarball of preloaded images
	I1205 11:13:47.035751   11494 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:13:47.035757   11494 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:13:47.035804   11494 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/auto-972000/config.json ...
	I1205 11:13:47.035818   11494 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/auto-972000/config.json: {Name:mk4bd4bc214dfac12b9a919998fd2ee6cc32dbfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:13:47.036166   11494 start.go:360] acquireMachinesLock for auto-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:13:47.036212   11494 start.go:364] duration metric: took 41.083µs to acquireMachinesLock for "auto-972000"
	I1205 11:13:47.036224   11494 start.go:93] Provisioning new machine with config: &{Name:auto-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:13:47.036260   11494 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:13:47.044705   11494 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:13:47.059458   11494 start.go:159] libmachine.API.Create for "auto-972000" (driver="qemu2")
	I1205 11:13:47.059482   11494 client.go:168] LocalClient.Create starting
	I1205 11:13:47.059550   11494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:13:47.059588   11494 main.go:141] libmachine: Decoding PEM data...
	I1205 11:13:47.059602   11494 main.go:141] libmachine: Parsing certificate...
	I1205 11:13:47.059638   11494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:13:47.059667   11494 main.go:141] libmachine: Decoding PEM data...
	I1205 11:13:47.059675   11494 main.go:141] libmachine: Parsing certificate...
	I1205 11:13:47.060113   11494 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:13:47.218541   11494 main.go:141] libmachine: Creating SSH key...
	I1205 11:13:47.337109   11494 main.go:141] libmachine: Creating Disk image...
	I1205 11:13:47.337121   11494 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:13:47.337347   11494 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2
	I1205 11:13:47.347256   11494 main.go:141] libmachine: STDOUT: 
	I1205 11:13:47.347280   11494 main.go:141] libmachine: STDERR: 
	I1205 11:13:47.347342   11494 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2 +20000M
	I1205 11:13:47.356080   11494 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:13:47.356098   11494 main.go:141] libmachine: STDERR: 
	I1205 11:13:47.356122   11494 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2
	I1205 11:13:47.356128   11494 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:13:47.356140   11494 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:13:47.356168   11494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:93:23:71:0b:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2
	I1205 11:13:47.358024   11494 main.go:141] libmachine: STDOUT: 
	I1205 11:13:47.358041   11494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:13:47.358065   11494 client.go:171] duration metric: took 298.57625ms to LocalClient.Create
	I1205 11:13:49.360287   11494 start.go:128] duration metric: took 2.323981333s to createHost
	I1205 11:13:49.360393   11494 start.go:83] releasing machines lock for "auto-972000", held for 2.324164542s
	W1205 11:13:49.360446   11494 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:13:49.375879   11494 out.go:177] * Deleting "auto-972000" in qemu2 ...
	W1205 11:13:49.403966   11494 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:13:49.404002   11494 start.go:729] Will try again in 5 seconds ...
	I1205 11:13:54.406311   11494 start.go:360] acquireMachinesLock for auto-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:13:54.406963   11494 start.go:364] duration metric: took 517.959µs to acquireMachinesLock for "auto-972000"
	I1205 11:13:54.407109   11494 start.go:93] Provisioning new machine with config: &{Name:auto-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:13:54.407446   11494 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:13:54.419023   11494 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:13:54.462296   11494 start.go:159] libmachine.API.Create for "auto-972000" (driver="qemu2")
	I1205 11:13:54.462346   11494 client.go:168] LocalClient.Create starting
	I1205 11:13:54.462470   11494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:13:54.462561   11494 main.go:141] libmachine: Decoding PEM data...
	I1205 11:13:54.462579   11494 main.go:141] libmachine: Parsing certificate...
	I1205 11:13:54.462645   11494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:13:54.462701   11494 main.go:141] libmachine: Decoding PEM data...
	I1205 11:13:54.462713   11494 main.go:141] libmachine: Parsing certificate...
	I1205 11:13:54.463234   11494 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:13:54.630949   11494 main.go:141] libmachine: Creating SSH key...
	I1205 11:13:54.677828   11494 main.go:141] libmachine: Creating Disk image...
	I1205 11:13:54.677833   11494 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:13:54.678064   11494 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2
	I1205 11:13:54.688280   11494 main.go:141] libmachine: STDOUT: 
	I1205 11:13:54.688298   11494 main.go:141] libmachine: STDERR: 
	I1205 11:13:54.688361   11494 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2 +20000M
	I1205 11:13:54.697054   11494 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:13:54.697071   11494 main.go:141] libmachine: STDERR: 
	I1205 11:13:54.697085   11494 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2
	I1205 11:13:54.697089   11494 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:13:54.697099   11494 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:13:54.697126   11494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:84:3a:15:8d:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/auto-972000/disk.qcow2
	I1205 11:13:54.698992   11494 main.go:141] libmachine: STDOUT: 
	I1205 11:13:54.699007   11494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:13:54.699025   11494 client.go:171] duration metric: took 236.671333ms to LocalClient.Create
	I1205 11:13:56.701246   11494 start.go:128] duration metric: took 2.293748125s to createHost
	I1205 11:13:56.701347   11494 start.go:83] releasing machines lock for "auto-972000", held for 2.294353583s
	W1205 11:13:56.701912   11494 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:13:56.711512   11494 out.go:201] 
	W1205 11:13:56.718682   11494 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:13:56.718708   11494 out.go:270] * 
	W1205 11:13:56.721205   11494 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:13:56.732533   11494 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.84s)
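
The verbose trace makes the launch mechanics visible: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client with the socket path followed by the full qemu-system-aarch64 command line, and the client is expected to pass the vmnet datagram socket to qemu as file descriptor 3 (hence -netdev socket,id=net0,fd=3). The "Connection refused" is raised by the client's initial connect(), before qemu ever starts. A minimal connectivity probe under that assumption (the client just connects and execs the given command, so any trivial command will do):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket_vmnet reachable" || echo "connect to /var/run/socket_vmnet failed"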

TestNetworkPlugins/group/kindnet/Start (10s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.000227292s)

-- stdout --
	* [kindnet-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-972000" primary control-plane node in "kindnet-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:13:59.195465   11608 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:13:59.195646   11608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:13:59.195649   11608 out.go:358] Setting ErrFile to fd 2...
	I1205 11:13:59.195651   11608 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:13:59.195785   11608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:13:59.196969   11608 out.go:352] Setting JSON to false
	I1205 11:13:59.215010   11608 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6211,"bootTime":1733419828,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:13:59.215094   11608 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:13:59.222143   11608 out.go:177] * [kindnet-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:13:59.230098   11608 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:13:59.230146   11608 notify.go:220] Checking for updates...
	I1205 11:13:59.237019   11608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:13:59.241004   11608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:13:59.244989   11608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:13:59.248055   11608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:13:59.251064   11608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:13:59.254394   11608 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:13:59.254473   11608 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:13:59.254529   11608 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:13:59.257947   11608 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:13:59.265018   11608 start.go:297] selected driver: qemu2
	I1205 11:13:59.265025   11608 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:13:59.265036   11608 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:13:59.267656   11608 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:13:59.272007   11608 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:13:59.275044   11608 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:13:59.275061   11608 cni.go:84] Creating CNI manager for "kindnet"
	I1205 11:13:59.275066   11608 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 11:13:59.275115   11608 start.go:340] cluster config:
	{Name:kindnet-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:13:59.279924   11608 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:13:59.288003   11608 out.go:177] * Starting "kindnet-972000" primary control-plane node in "kindnet-972000" cluster
	I1205 11:13:59.290999   11608 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:13:59.291016   11608 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:13:59.291028   11608 cache.go:56] Caching tarball of preloaded images
	I1205 11:13:59.291097   11608 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:13:59.291103   11608 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:13:59.291171   11608 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kindnet-972000/config.json ...
	I1205 11:13:59.291187   11608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kindnet-972000/config.json: {Name:mke65295dd464f56b509f69a5d0fdc13f0b079be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:13:59.291647   11608 start.go:360] acquireMachinesLock for kindnet-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:13:59.291688   11608 start.go:364] duration metric: took 36.625µs to acquireMachinesLock for "kindnet-972000"
	I1205 11:13:59.291698   11608 start.go:93] Provisioning new machine with config: &{Name:kindnet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kindnet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:13:59.291727   11608 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:13:59.298944   11608 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:13:59.314422   11608 start.go:159] libmachine.API.Create for "kindnet-972000" (driver="qemu2")
	I1205 11:13:59.314452   11608 client.go:168] LocalClient.Create starting
	I1205 11:13:59.314539   11608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:13:59.314575   11608 main.go:141] libmachine: Decoding PEM data...
	I1205 11:13:59.314588   11608 main.go:141] libmachine: Parsing certificate...
	I1205 11:13:59.314625   11608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:13:59.314653   11608 main.go:141] libmachine: Decoding PEM data...
	I1205 11:13:59.314660   11608 main.go:141] libmachine: Parsing certificate...
	I1205 11:13:59.315202   11608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:13:59.473319   11608 main.go:141] libmachine: Creating SSH key...
	I1205 11:13:59.584190   11608 main.go:141] libmachine: Creating Disk image...
	I1205 11:13:59.584199   11608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:13:59.584434   11608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2
	I1205 11:13:59.594604   11608 main.go:141] libmachine: STDOUT: 
	I1205 11:13:59.594628   11608 main.go:141] libmachine: STDERR: 
	I1205 11:13:59.594703   11608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2 +20000M
	I1205 11:13:59.603515   11608 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:13:59.603532   11608 main.go:141] libmachine: STDERR: 
	I1205 11:13:59.603547   11608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2
	I1205 11:13:59.603553   11608 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:13:59.603568   11608 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:13:59.603598   11608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:90:3d:3a:be:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2
	I1205 11:13:59.605559   11608 main.go:141] libmachine: STDOUT: 
	I1205 11:13:59.605574   11608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:13:59.605592   11608 client.go:171] duration metric: took 291.134334ms to LocalClient.Create
	I1205 11:14:01.607292   11608 start.go:128] duration metric: took 2.315527584s to createHost
	I1205 11:14:01.607371   11608 start.go:83] releasing machines lock for "kindnet-972000", held for 2.315668s
	W1205 11:14:01.607505   11608 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:01.617848   11608 out.go:177] * Deleting "kindnet-972000" in qemu2 ...
	W1205 11:14:01.649073   11608 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:01.649104   11608 start.go:729] Will try again in 5 seconds ...
	I1205 11:14:06.651352   11608 start.go:360] acquireMachinesLock for kindnet-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:06.652025   11608 start.go:364] duration metric: took 569.125µs to acquireMachinesLock for "kindnet-972000"
	I1205 11:14:06.652165   11608 start.go:93] Provisioning new machine with config: &{Name:kindnet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:kindnet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:06.652482   11608 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:06.658119   11608 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:06.707262   11608 start.go:159] libmachine.API.Create for "kindnet-972000" (driver="qemu2")
	I1205 11:14:06.707312   11608 client.go:168] LocalClient.Create starting
	I1205 11:14:06.707463   11608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:06.707551   11608 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:06.707572   11608 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:06.707633   11608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:06.707690   11608 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:06.707706   11608 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:06.708327   11608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:06.878394   11608 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:07.104250   11608 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:07.104265   11608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:07.104517   11608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2
	I1205 11:14:07.114879   11608 main.go:141] libmachine: STDOUT: 
	I1205 11:14:07.114904   11608 main.go:141] libmachine: STDERR: 
	I1205 11:14:07.114981   11608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2 +20000M
	I1205 11:14:07.123774   11608 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:07.123793   11608 main.go:141] libmachine: STDERR: 
	I1205 11:14:07.123806   11608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2
	I1205 11:14:07.123819   11608 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:07.123829   11608 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:07.123857   11608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:77:7d:83:be:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kindnet-972000/disk.qcow2
	I1205 11:14:07.125835   11608 main.go:141] libmachine: STDOUT: 
	I1205 11:14:07.125849   11608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:07.125863   11608 client.go:171] duration metric: took 418.544042ms to LocalClient.Create
	I1205 11:14:09.128049   11608 start.go:128] duration metric: took 2.475529625s to createHost
	I1205 11:14:09.128114   11608 start.go:83] releasing machines lock for "kindnet-972000", held for 2.476059166s
	W1205 11:14:09.128373   11608 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:09.139982   11608 out.go:201] 
	W1205 11:14:09.143985   11608 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:14:09.144028   11608 out.go:270] * 
	* 
	W1205 11:14:09.146224   11608 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:14:09.155051   11608 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.00s)
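Note the retry shape in the log above: create VM, hit Connection refused, delete the profile, wait 5 seconds, create again, fail again, exit 80. The suggested "minikube delete -p kindnet-972000" only clears the half-created profile; it cannot fix a daemon that is not listening. A sketch of the likely remediation, assuming socket_vmnet runs under launchd with the label from its upstream install instructions (the label is an assumption; verify it first):

	# Confirm the label actually registered on this host.
	sudo launchctl list | grep -i socket_vmnet

	# Restart the daemon (kickstart -k kills and restarts the service).
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

	# Then clear the leftover profile before re-running the test.
	out/minikube-darwin-arm64 delete -p kindnet-972000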

TestNetworkPlugins/group/calico/Start (10.06s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.055191041s)

-- stdout --
	* [calico-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-972000" primary control-plane node in "calico-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:14:11.620844   11725 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:14:11.621002   11725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:11.621006   11725 out.go:358] Setting ErrFile to fd 2...
	I1205 11:14:11.621008   11725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:11.621153   11725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:14:11.622383   11725 out.go:352] Setting JSON to false
	I1205 11:14:11.640679   11725 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6223,"bootTime":1733419828,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:14:11.640753   11725 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:14:11.646000   11725 out.go:177] * [calico-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:14:11.653953   11725 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:14:11.654013   11725 notify.go:220] Checking for updates...
	I1205 11:14:11.660953   11725 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:14:11.663907   11725 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:14:11.667934   11725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:14:11.670964   11725 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:14:11.673882   11725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:14:11.677429   11725 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:14:11.677512   11725 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:14:11.677568   11725 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:14:11.680919   11725 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:14:11.687952   11725 start.go:297] selected driver: qemu2
	I1205 11:14:11.687960   11725 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:14:11.687973   11725 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:14:11.690544   11725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:14:11.694970   11725 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:14:11.697978   11725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:14:11.697993   11725 cni.go:84] Creating CNI manager for "calico"
	I1205 11:14:11.697997   11725 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 11:14:11.698027   11725 start.go:340] cluster config:
	{Name:calico-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:14:11.702778   11725 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:14:11.709886   11725 out.go:177] * Starting "calico-972000" primary control-plane node in "calico-972000" cluster
	I1205 11:14:11.713925   11725 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:14:11.713939   11725 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:14:11.713949   11725 cache.go:56] Caching tarball of preloaded images
	I1205 11:14:11.714020   11725 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:14:11.714025   11725 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:14:11.714078   11725 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/calico-972000/config.json ...
	I1205 11:14:11.714089   11725 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/calico-972000/config.json: {Name:mk755d934cb0ee7ca5455fb835c238d513ed93f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:14:11.714347   11725 start.go:360] acquireMachinesLock for calico-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:11.714394   11725 start.go:364] duration metric: took 41.5µs to acquireMachinesLock for "calico-972000"
	I1205 11:14:11.714406   11725 start.go:93] Provisioning new machine with config: &{Name:calico-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:11.714439   11725 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:11.722909   11725 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:11.738977   11725 start.go:159] libmachine.API.Create for "calico-972000" (driver="qemu2")
	I1205 11:14:11.739014   11725 client.go:168] LocalClient.Create starting
	I1205 11:14:11.739091   11725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:11.739127   11725 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:11.739139   11725 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:11.739176   11725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:11.739205   11725 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:11.739213   11725 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:11.739596   11725 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:11.898548   11725 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:12.083750   11725 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:12.083760   11725 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:12.083979   11725 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2
	I1205 11:14:12.094355   11725 main.go:141] libmachine: STDOUT: 
	I1205 11:14:12.094380   11725 main.go:141] libmachine: STDERR: 
	I1205 11:14:12.094447   11725 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2 +20000M
	I1205 11:14:12.103186   11725 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:12.103212   11725 main.go:141] libmachine: STDERR: 
	I1205 11:14:12.103237   11725 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2
	I1205 11:14:12.103244   11725 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:12.103256   11725 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:12.103290   11725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:cd:fe:c8:19:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2
	I1205 11:14:12.105205   11725 main.go:141] libmachine: STDOUT: 
	I1205 11:14:12.105221   11725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:12.105238   11725 client.go:171] duration metric: took 366.215833ms to LocalClient.Create
	I1205 11:14:14.107346   11725 start.go:128] duration metric: took 2.392883083s to createHost
	I1205 11:14:14.107419   11725 start.go:83] releasing machines lock for "calico-972000", held for 2.392986167s
	W1205 11:14:14.107441   11725 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:14.119200   11725 out.go:177] * Deleting "calico-972000" in qemu2 ...
	W1205 11:14:14.133820   11725 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:14.133829   11725 start.go:729] Will try again in 5 seconds ...
	I1205 11:14:19.136331   11725 start.go:360] acquireMachinesLock for calico-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:19.136910   11725 start.go:364] duration metric: took 444.042µs to acquireMachinesLock for "calico-972000"
	I1205 11:14:19.137046   11725 start.go:93] Provisioning new machine with config: &{Name:calico-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:calico-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:19.137354   11725 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:19.148454   11725 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:19.193639   11725 start.go:159] libmachine.API.Create for "calico-972000" (driver="qemu2")
	I1205 11:14:19.193685   11725 client.go:168] LocalClient.Create starting
	I1205 11:14:19.193870   11725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:19.193950   11725 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:19.193969   11725 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:19.194045   11725 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:19.194103   11725 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:19.194119   11725 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:19.194813   11725 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:19.364626   11725 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:19.571235   11725 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:19.571247   11725 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:19.571458   11725 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2
	I1205 11:14:19.582486   11725 main.go:141] libmachine: STDOUT: 
	I1205 11:14:19.582523   11725 main.go:141] libmachine: STDERR: 
	I1205 11:14:19.582606   11725 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2 +20000M
	I1205 11:14:19.591700   11725 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:19.591722   11725 main.go:141] libmachine: STDERR: 
	I1205 11:14:19.591736   11725 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2
	I1205 11:14:19.591742   11725 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:19.591752   11725 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:19.591807   11725 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:a9:78:31:fc:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/calico-972000/disk.qcow2
	I1205 11:14:19.593707   11725 main.go:141] libmachine: STDOUT: 
	I1205 11:14:19.593719   11725 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:19.593733   11725 client.go:171] duration metric: took 400.042458ms to LocalClient.Create
	I1205 11:14:21.595949   11725 start.go:128] duration metric: took 2.45855475s to createHost
	I1205 11:14:21.596053   11725 start.go:83] releasing machines lock for "calico-972000", held for 2.459112667s
	W1205 11:14:21.596469   11725 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:21.607142   11725 out.go:201] 
	W1205 11:14:21.617234   11725 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:14:21.617269   11725 out.go:270] * 
	* 
	W1205 11:14:21.619977   11725 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:14:21.629152   11725 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.06s)
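Since auto, kindnet, and calico (and the runs that follow) all exit 80 for the identical GUEST_PROVISION reason, the per-CNI results carry no signal about the CNIs themselves. A hypothetical pre-flight gate for the job, so a dead daemon skips the matrix once instead of failing each plugin in turn (the guard script is illustrative, not part of the suite):

	# Hypothetical gate: bail out before the NetworkPlugins matrix starts
	# if the socket_vmnet handshake already fails.
	if ! /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true; then
		echo "socket_vmnet unreachable at /var/run/socket_vmnet; aborting matrix" >&2
		exit 1
	fi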

TestNetworkPlugins/group/custom-flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.770096291s)

-- stdout --
	* [custom-flannel-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-972000" primary control-plane node in "custom-flannel-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:14:24.262554   11842 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:14:24.262714   11842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:24.262718   11842 out.go:358] Setting ErrFile to fd 2...
	I1205 11:14:24.262721   11842 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:24.262881   11842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:14:24.264345   11842 out.go:352] Setting JSON to false
	I1205 11:14:24.284818   11842 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6236,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:14:24.284913   11842 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:14:24.290006   11842 out.go:177] * [custom-flannel-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:14:24.301251   11842 notify.go:220] Checking for updates...
	I1205 11:14:24.305054   11842 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:14:24.309114   11842 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:14:24.313133   11842 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:14:24.316149   11842 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:14:24.320087   11842 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:14:24.323101   11842 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:14:24.326491   11842 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:14:24.326561   11842 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:14:24.326615   11842 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:14:24.331094   11842 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:14:24.338049   11842 start.go:297] selected driver: qemu2
	I1205 11:14:24.338055   11842 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:14:24.338065   11842 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:14:24.340483   11842 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:14:24.343090   11842 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:14:24.349161   11842 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:14:24.349181   11842 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1205 11:14:24.349190   11842 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1205 11:14:24.349227   11842 start.go:340] cluster config:
	{Name:custom-flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:14:24.353729   11842 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:14:24.361103   11842 out.go:177] * Starting "custom-flannel-972000" primary control-plane node in "custom-flannel-972000" cluster
	I1205 11:14:24.365058   11842 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:14:24.365073   11842 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:14:24.365084   11842 cache.go:56] Caching tarball of preloaded images
	I1205 11:14:24.365155   11842 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:14:24.365160   11842 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:14:24.365212   11842 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/custom-flannel-972000/config.json ...
	I1205 11:14:24.365222   11842 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/custom-flannel-972000/config.json: {Name:mk73dc0dafed9d888947620007141eb686ba5797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:14:24.365591   11842 start.go:360] acquireMachinesLock for custom-flannel-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:24.365638   11842 start.go:364] duration metric: took 37.666µs to acquireMachinesLock for "custom-flannel-972000"
	I1205 11:14:24.365649   11842 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:24.365674   11842 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:24.370045   11842 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:24.384435   11842 start.go:159] libmachine.API.Create for "custom-flannel-972000" (driver="qemu2")
	I1205 11:14:24.384460   11842 client.go:168] LocalClient.Create starting
	I1205 11:14:24.384530   11842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:24.384570   11842 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:24.384580   11842 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:24.384615   11842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:24.384645   11842 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:24.384653   11842 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:24.385093   11842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:24.545823   11842 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:24.602885   11842 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:24.602893   11842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:24.603101   11842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2
	I1205 11:14:24.612945   11842 main.go:141] libmachine: STDOUT: 
	I1205 11:14:24.612968   11842 main.go:141] libmachine: STDERR: 
	I1205 11:14:24.613043   11842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2 +20000M
	I1205 11:14:24.622208   11842 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:24.622231   11842 main.go:141] libmachine: STDERR: 
	I1205 11:14:24.622254   11842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2
	I1205 11:14:24.622260   11842 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:24.622271   11842 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:24.622305   11842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:61:91:7b:05:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2
	I1205 11:14:24.624162   11842 main.go:141] libmachine: STDOUT: 
	I1205 11:14:24.624176   11842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:24.624194   11842 client.go:171] duration metric: took 239.72825ms to LocalClient.Create
	I1205 11:14:26.626416   11842 start.go:128] duration metric: took 2.260701458s to createHost
	I1205 11:14:26.626489   11842 start.go:83] releasing machines lock for "custom-flannel-972000", held for 2.260835459s
	W1205 11:14:26.626590   11842 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:26.636922   11842 out.go:177] * Deleting "custom-flannel-972000" in qemu2 ...
	W1205 11:14:26.670168   11842 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:26.670209   11842 start.go:729] Will try again in 5 seconds ...
	I1205 11:14:31.672533   11842 start.go:360] acquireMachinesLock for custom-flannel-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:31.673210   11842 start.go:364] duration metric: took 530.084µs to acquireMachinesLock for "custom-flannel-972000"
	I1205 11:14:31.673403   11842 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:31.673644   11842 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:31.683004   11842 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:31.722378   11842 start.go:159] libmachine.API.Create for "custom-flannel-972000" (driver="qemu2")
	I1205 11:14:31.722431   11842 client.go:168] LocalClient.Create starting
	I1205 11:14:31.722577   11842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:31.722671   11842 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:31.722687   11842 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:31.722740   11842 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:31.722790   11842 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:31.722803   11842 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:31.723485   11842 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:31.892129   11842 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:31.930712   11842 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:31.930719   11842 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:31.930973   11842 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2
	I1205 11:14:31.942067   11842 main.go:141] libmachine: STDOUT: 
	I1205 11:14:31.942101   11842 main.go:141] libmachine: STDERR: 
	I1205 11:14:31.942200   11842 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2 +20000M
	I1205 11:14:31.952497   11842 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:31.952528   11842 main.go:141] libmachine: STDERR: 
	I1205 11:14:31.952547   11842 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2
	I1205 11:14:31.952554   11842 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:31.952567   11842 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:31.952597   11842 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:09:aa:bc:67:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/custom-flannel-972000/disk.qcow2
	I1205 11:14:31.954884   11842 main.go:141] libmachine: STDOUT: 
	I1205 11:14:31.954899   11842 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:31.954911   11842 client.go:171] duration metric: took 232.472458ms to LocalClient.Create
	I1205 11:14:33.957131   11842 start.go:128] duration metric: took 2.283407708s to createHost
	I1205 11:14:33.957199   11842 start.go:83] releasing machines lock for "custom-flannel-972000", held for 2.283924792s
	W1205 11:14:33.957721   11842 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:33.970425   11842 out.go:201] 
	W1205 11:14:33.975440   11842 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:14:33.975483   11842 out.go:270] * 
	* 
	W1205 11:14:33.977548   11842 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:14:33.986426   11842 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.77s)
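Every failure in this group reduces to the same host-side precondition: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot hand QEMU a connected network fd and minikube aborts with GUEST_PROVISION. A minimal triage sketch for the CI host follows; only the socket and client paths come from the log above, while the Homebrew service commands are assumptions about how socket_vmnet was installed:

	# confirm the unix socket exists and a daemon is holding it
	ls -l /var/run/socket_vmnet
	# probe it the way libmachine does; `true` is a placeholder child command,
	# which a healthy daemon would exec with fd 3 already connected
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# restart the daemon (assumed Homebrew-managed; it must run as root to create vmnet interfaces)
	sudo brew services restart socket_vmnet

The identical "Connection refused" recurs verbatim in every remaining test in this section, which points at a single host-level fault rather than anything specific to the flannel, false, or bridge CNI configurations under test.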

TestNetworkPlugins/group/false/Start (9.88s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.878642875s)

-- stdout --
	* [false-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-972000" primary control-plane node in "false-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:14:36.564682   11959 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:14:36.564823   11959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:36.564826   11959 out.go:358] Setting ErrFile to fd 2...
	I1205 11:14:36.564828   11959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:36.564973   11959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:14:36.566200   11959 out.go:352] Setting JSON to false
	I1205 11:14:36.583971   11959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6248,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:14:36.584048   11959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:14:36.588707   11959 out.go:177] * [false-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:14:36.596897   11959 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:14:36.596939   11959 notify.go:220] Checking for updates...
	I1205 11:14:36.603789   11959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:14:36.606804   11959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:14:36.609811   11959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:14:36.612798   11959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:14:36.615818   11959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:14:36.619184   11959 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:14:36.619258   11959 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:14:36.619315   11959 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:14:36.622772   11959 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:14:36.628796   11959 start.go:297] selected driver: qemu2
	I1205 11:14:36.628803   11959 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:14:36.628811   11959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:14:36.631301   11959 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:14:36.635758   11959 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:14:36.638903   11959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:14:36.638921   11959 cni.go:84] Creating CNI manager for "false"
	I1205 11:14:36.638946   11959 start.go:340] cluster config:
	{Name:false-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:14:36.643601   11959 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:14:36.651819   11959 out.go:177] * Starting "false-972000" primary control-plane node in "false-972000" cluster
	I1205 11:14:36.655692   11959 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:14:36.655707   11959 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:14:36.655720   11959 cache.go:56] Caching tarball of preloaded images
	I1205 11:14:36.655799   11959 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:14:36.655805   11959 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:14:36.655861   11959 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/false-972000/config.json ...
	I1205 11:14:36.655878   11959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/false-972000/config.json: {Name:mk46c650a1ad251fab74446c2607d6e6301b27f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:14:36.656353   11959 start.go:360] acquireMachinesLock for false-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:36.656395   11959 start.go:364] duration metric: took 37µs to acquireMachinesLock for "false-972000"
	I1205 11:14:36.656405   11959 start.go:93] Provisioning new machine with config: &{Name:false-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:36.656427   11959 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:36.660820   11959 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:36.676060   11959 start.go:159] libmachine.API.Create for "false-972000" (driver="qemu2")
	I1205 11:14:36.676088   11959 client.go:168] LocalClient.Create starting
	I1205 11:14:36.676153   11959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:36.676192   11959 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:36.676203   11959 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:36.676238   11959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:36.676266   11959 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:36.676274   11959 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:36.676647   11959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:36.835789   11959 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:37.030660   11959 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:37.030672   11959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:37.030907   11959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2
	I1205 11:14:37.041499   11959 main.go:141] libmachine: STDOUT: 
	I1205 11:14:37.041526   11959 main.go:141] libmachine: STDERR: 
	I1205 11:14:37.041587   11959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2 +20000M
	I1205 11:14:37.050441   11959 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:37.050499   11959 main.go:141] libmachine: STDERR: 
	I1205 11:14:37.050515   11959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2
	I1205 11:14:37.050519   11959 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:37.050531   11959 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:37.050564   11959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:4d:b4:6f:76:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2
	I1205 11:14:37.052509   11959 main.go:141] libmachine: STDOUT: 
	I1205 11:14:37.052558   11959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:37.052579   11959 client.go:171] duration metric: took 376.484792ms to LocalClient.Create
	I1205 11:14:39.054888   11959 start.go:128] duration metric: took 2.398427708s to createHost
	I1205 11:14:39.054957   11959 start.go:83] releasing machines lock for "false-972000", held for 2.398548167s
	W1205 11:14:39.055020   11959 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:39.071183   11959 out.go:177] * Deleting "false-972000" in qemu2 ...
	W1205 11:14:39.096144   11959 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:39.096173   11959 start.go:729] Will try again in 5 seconds ...
	I1205 11:14:44.098422   11959 start.go:360] acquireMachinesLock for false-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:44.098871   11959 start.go:364] duration metric: took 340.584µs to acquireMachinesLock for "false-972000"
	I1205 11:14:44.098964   11959 start.go:93] Provisioning new machine with config: &{Name:false-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:44.099178   11959 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:44.107683   11959 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:44.141024   11959 start.go:159] libmachine.API.Create for "false-972000" (driver="qemu2")
	I1205 11:14:44.141069   11959 client.go:168] LocalClient.Create starting
	I1205 11:14:44.141202   11959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:44.141267   11959 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:44.141279   11959 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:44.141329   11959 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:44.141372   11959 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:44.141381   11959 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:44.141948   11959 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:44.305227   11959 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:44.345279   11959 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:44.345287   11959 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:44.345494   11959 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2
	I1205 11:14:44.355336   11959 main.go:141] libmachine: STDOUT: 
	I1205 11:14:44.355357   11959 main.go:141] libmachine: STDERR: 
	I1205 11:14:44.355427   11959 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2 +20000M
	I1205 11:14:44.364122   11959 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:44.364184   11959 main.go:141] libmachine: STDERR: 
	I1205 11:14:44.364196   11959 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2
	I1205 11:14:44.364204   11959 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:44.364214   11959 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:44.364241   11959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e7:66:9b:6d:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/false-972000/disk.qcow2
	I1205 11:14:44.366083   11959 main.go:141] libmachine: STDOUT: 
	I1205 11:14:44.366145   11959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:44.366159   11959 client.go:171] duration metric: took 225.082166ms to LocalClient.Create
	I1205 11:14:46.368377   11959 start.go:128] duration metric: took 2.26908425s to createHost
	I1205 11:14:46.368451   11959 start.go:83] releasing machines lock for "false-972000", held for 2.269531208s
	W1205 11:14:46.368915   11959 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:46.379636   11959 out.go:201] 
	W1205 11:14:46.386702   11959 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:14:46.386739   11959 out.go:270] * 
	* 
	W1205 11:14:46.389644   11959 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:14:46.398525   11959 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.88s)

TestNetworkPlugins/group/enable-default-cni/Start (9.96s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.954967s)

-- stdout --
	* [enable-default-cni-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-972000" primary control-plane node in "enable-default-cni-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:14:48.721374   12069 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:14:48.721537   12069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:48.721540   12069 out.go:358] Setting ErrFile to fd 2...
	I1205 11:14:48.721543   12069 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:14:48.721654   12069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:14:48.722846   12069 out.go:352] Setting JSON to false
	I1205 11:14:48.742165   12069 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6260,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:14:48.742236   12069 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:14:48.749477   12069 out.go:177] * [enable-default-cni-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:14:48.757431   12069 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:14:48.757506   12069 notify.go:220] Checking for updates...
	I1205 11:14:48.768918   12069 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:14:48.773462   12069 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:14:48.777349   12069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:14:48.780389   12069 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:14:48.783363   12069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:14:48.786708   12069 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:14:48.786786   12069 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:14:48.786837   12069 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:14:48.791381   12069 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:14:48.798289   12069 start.go:297] selected driver: qemu2
	I1205 11:14:48.798295   12069 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:14:48.798300   12069 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:14:48.800656   12069 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:14:48.804309   12069 out.go:177] * Automatically selected the socket_vmnet network
	E1205 11:14:48.807403   12069 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1205 11:14:48.807414   12069 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:14:48.807430   12069 cni.go:84] Creating CNI manager for "bridge"
	I1205 11:14:48.807436   12069 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:14:48.807472   12069 start.go:340] cluster config:
	{Name:enable-default-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:14:48.811720   12069 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:14:48.820349   12069 out.go:177] * Starting "enable-default-cni-972000" primary control-plane node in "enable-default-cni-972000" cluster
	I1205 11:14:48.824303   12069 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:14:48.824319   12069 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:14:48.824329   12069 cache.go:56] Caching tarball of preloaded images
	I1205 11:14:48.824399   12069 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:14:48.824405   12069 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:14:48.824498   12069 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/enable-default-cni-972000/config.json ...
	I1205 11:14:48.824508   12069 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/enable-default-cni-972000/config.json: {Name:mk706ac61bd6e0a0bf22f94bd048ed63057d6929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:14:48.824970   12069 start.go:360] acquireMachinesLock for enable-default-cni-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:48.825015   12069 start.go:364] duration metric: took 36.167µs to acquireMachinesLock for "enable-default-cni-972000"
	I1205 11:14:48.825025   12069 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:48.825058   12069 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:48.833310   12069 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:48.847802   12069 start.go:159] libmachine.API.Create for "enable-default-cni-972000" (driver="qemu2")
	I1205 11:14:48.847824   12069 client.go:168] LocalClient.Create starting
	I1205 11:14:48.847899   12069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:48.847936   12069 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:48.847953   12069 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:48.847990   12069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:48.848024   12069 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:48.848034   12069 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:48.848538   12069 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:49.009045   12069 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:49.169990   12069 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:49.170000   12069 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:49.170241   12069 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I1205 11:14:49.180365   12069 main.go:141] libmachine: STDOUT: 
	I1205 11:14:49.180383   12069 main.go:141] libmachine: STDERR: 
	I1205 11:14:49.180462   12069 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2 +20000M
	I1205 11:14:49.188919   12069 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:49.188936   12069 main.go:141] libmachine: STDERR: 
	I1205 11:14:49.188951   12069 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I1205 11:14:49.188958   12069 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:49.188970   12069 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:49.189007   12069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:9f:56:78:a0:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I1205 11:14:49.190941   12069 main.go:141] libmachine: STDOUT: 
	I1205 11:14:49.190964   12069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:49.190985   12069 client.go:171] duration metric: took 343.155958ms to LocalClient.Create
	I1205 11:14:51.193211   12069 start.go:128] duration metric: took 2.368110625s to createHost
	I1205 11:14:51.193294   12069 start.go:83] releasing machines lock for "enable-default-cni-972000", held for 2.368262958s
	W1205 11:14:51.193348   12069 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:51.204645   12069 out.go:177] * Deleting "enable-default-cni-972000" in qemu2 ...
	W1205 11:14:51.236122   12069 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:51.236148   12069 start.go:729] Will try again in 5 seconds ...
	I1205 11:14:56.238338   12069 start.go:360] acquireMachinesLock for enable-default-cni-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:14:56.238672   12069 start.go:364] duration metric: took 281.833µs to acquireMachinesLock for "enable-default-cni-972000"
	I1205 11:14:56.238712   12069 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:14:56.238854   12069 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:14:56.258237   12069 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:14:56.293347   12069 start.go:159] libmachine.API.Create for "enable-default-cni-972000" (driver="qemu2")
	I1205 11:14:56.293391   12069 client.go:168] LocalClient.Create starting
	I1205 11:14:56.293507   12069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:14:56.293576   12069 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:56.293591   12069 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:56.293649   12069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:14:56.293700   12069 main.go:141] libmachine: Decoding PEM data...
	I1205 11:14:56.293714   12069 main.go:141] libmachine: Parsing certificate...
	I1205 11:14:56.294390   12069 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:14:56.461609   12069 main.go:141] libmachine: Creating SSH key...
	I1205 11:14:56.578312   12069 main.go:141] libmachine: Creating Disk image...
	I1205 11:14:56.578321   12069 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:14:56.578545   12069 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I1205 11:14:56.588544   12069 main.go:141] libmachine: STDOUT: 
	I1205 11:14:56.588563   12069 main.go:141] libmachine: STDERR: 
	I1205 11:14:56.588626   12069 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2 +20000M
	I1205 11:14:56.597143   12069 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:14:56.597167   12069 main.go:141] libmachine: STDERR: 
	I1205 11:14:56.597186   12069 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I1205 11:14:56.597194   12069 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:14:56.597206   12069 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:14:56.597239   12069 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:a4:4e:fe:ef:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/enable-default-cni-972000/disk.qcow2
	I1205 11:14:56.599185   12069 main.go:141] libmachine: STDOUT: 
	I1205 11:14:56.599199   12069 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:14:56.599216   12069 client.go:171] duration metric: took 305.819916ms to LocalClient.Create
	I1205 11:14:58.601372   12069 start.go:128] duration metric: took 2.362486042s to createHost
	I1205 11:14:58.601422   12069 start.go:83] releasing machines lock for "enable-default-cni-972000", held for 2.362728417s
	W1205 11:14:58.601755   12069 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:14:58.611627   12069 out.go:201] 
	W1205 11:14:58.615693   12069 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:14:58.615712   12069 out.go:270] * 
	* 
	W1205 11:14:58.617221   12069 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:14:58.631446   12069 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.96s)
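
Note: every failure in this group has the same proximate cause: the socket_vmnet daemon was not accepting connections on /var/run/socket_vmnet, so socket_vmnet_client could not launch qemu-system-aarch64 and the VM was never created. The failing step can be reproduced in isolation with a minimal probe like the following (an illustrative sketch, not part of the test suite; it assumes only the socket path shown in the logs):

	// probe.go - hypothetical standalone check that the socket_vmnet
	// daemon is listening on the unix socket used throughout these logs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "Connection refused" here matches the STDERR lines above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the daemon should be restarted before re-running the suite (for Homebrew installs, typically "sudo brew services restart socket_vmnet").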

TestNetworkPlugins/group/flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.902358125s)

-- stdout --
	* [flannel-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-972000" primary control-plane node in "flannel-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:15:00.949027   12178 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:00.949177   12178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:00.949183   12178 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:00.949186   12178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:00.949288   12178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:00.950485   12178 out.go:352] Setting JSON to false
	I1205 11:15:00.968776   12178 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6272,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:00.968851   12178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:00.972261   12178 out.go:177] * [flannel-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:00.980182   12178 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:00.980257   12178 notify.go:220] Checking for updates...
	I1205 11:15:00.988138   12178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:00.991135   12178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:15:00.995145   12178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:15:00.998162   12178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:15:01.001148   12178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:15:01.004464   12178 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:15:01.004536   12178 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:15:01.004584   12178 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:15:01.008100   12178 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:15:01.015081   12178 start.go:297] selected driver: qemu2
	I1205 11:15:01.015087   12178 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:15:01.015097   12178 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:15:01.017568   12178 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:15:01.021117   12178 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:15:01.024245   12178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:15:01.024269   12178 cni.go:84] Creating CNI manager for "flannel"
	I1205 11:15:01.024273   12178 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1205 11:15:01.024318   12178 start.go:340] cluster config:
	{Name:flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:01.028894   12178 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:01.037131   12178 out.go:177] * Starting "flannel-972000" primary control-plane node in "flannel-972000" cluster
	I1205 11:15:01.041108   12178 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:15:01.041124   12178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:15:01.041137   12178 cache.go:56] Caching tarball of preloaded images
	I1205 11:15:01.041224   12178 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:15:01.041230   12178 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:15:01.041294   12178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/flannel-972000/config.json ...
	I1205 11:15:01.041305   12178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/flannel-972000/config.json: {Name:mkcb296805238cda21521ab919ca5df11b245514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:15:01.041791   12178 start.go:360] acquireMachinesLock for flannel-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:01.041836   12178 start.go:364] duration metric: took 39.125µs to acquireMachinesLock for "flannel-972000"
	I1205 11:15:01.041847   12178 start.go:93] Provisioning new machine with config: &{Name:flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:01.041870   12178 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:01.049112   12178 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:15:01.064591   12178 start.go:159] libmachine.API.Create for "flannel-972000" (driver="qemu2")
	I1205 11:15:01.064624   12178 client.go:168] LocalClient.Create starting
	I1205 11:15:01.064694   12178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:01.064732   12178 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:01.064743   12178 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:01.064789   12178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:01.064819   12178 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:01.064827   12178 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:01.065288   12178 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:01.224351   12178 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:01.316743   12178 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:01.316751   12178 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:01.316969   12178 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2
	I1205 11:15:01.327066   12178 main.go:141] libmachine: STDOUT: 
	I1205 11:15:01.327083   12178 main.go:141] libmachine: STDERR: 
	I1205 11:15:01.327150   12178 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2 +20000M
	I1205 11:15:01.335779   12178 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:01.335802   12178 main.go:141] libmachine: STDERR: 
	I1205 11:15:01.335821   12178 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2
	I1205 11:15:01.335826   12178 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:01.335836   12178 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:01.335866   12178 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:2a:f0:2b:d9:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2
	I1205 11:15:01.337772   12178 main.go:141] libmachine: STDOUT: 
	I1205 11:15:01.337784   12178 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:01.337802   12178 client.go:171] duration metric: took 273.169291ms to LocalClient.Create
	I1205 11:15:03.339966   12178 start.go:128] duration metric: took 2.298072875s to createHost
	I1205 11:15:03.340059   12178 start.go:83] releasing machines lock for "flannel-972000", held for 2.298207791s
	W1205 11:15:03.340128   12178 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:03.356256   12178 out.go:177] * Deleting "flannel-972000" in qemu2 ...
	W1205 11:15:03.377107   12178 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:03.377137   12178 start.go:729] Will try again in 5 seconds ...
	I1205 11:15:08.379544   12178 start.go:360] acquireMachinesLock for flannel-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:08.380268   12178 start.go:364] duration metric: took 602.875µs to acquireMachinesLock for "flannel-972000"
	I1205 11:15:08.380360   12178 start.go:93] Provisioning new machine with config: &{Name:flannel-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:08.380615   12178 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:08.392265   12178 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:15:08.441056   12178 start.go:159] libmachine.API.Create for "flannel-972000" (driver="qemu2")
	I1205 11:15:08.441115   12178 client.go:168] LocalClient.Create starting
	I1205 11:15:08.441274   12178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:08.441347   12178 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:08.441363   12178 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:08.441439   12178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:08.441498   12178 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:08.441512   12178 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:08.442236   12178 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:08.611235   12178 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:08.746920   12178 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:08.746928   12178 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:08.747137   12178 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2
	I1205 11:15:08.757179   12178 main.go:141] libmachine: STDOUT: 
	I1205 11:15:08.757194   12178 main.go:141] libmachine: STDERR: 
	I1205 11:15:08.757263   12178 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2 +20000M
	I1205 11:15:08.766253   12178 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:08.766270   12178 main.go:141] libmachine: STDERR: 
	I1205 11:15:08.766293   12178 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2
	I1205 11:15:08.766299   12178 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:08.766309   12178 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:08.766334   12178 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:8d:4d:bc:25:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/flannel-972000/disk.qcow2
	I1205 11:15:08.768338   12178 main.go:141] libmachine: STDOUT: 
	I1205 11:15:08.768352   12178 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:08.768366   12178 client.go:171] duration metric: took 327.242542ms to LocalClient.Create
	I1205 11:15:10.770589   12178 start.go:128] duration metric: took 2.389925291s to createHost
	I1205 11:15:10.770700   12178 start.go:83] releasing machines lock for "flannel-972000", held for 2.390373958s
	W1205 11:15:10.771087   12178 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:10.786755   12178 out.go:201] 
	W1205 11:15:10.790843   12178 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:15:10.790880   12178 out.go:270] * 
	* 
	W1205 11:15:10.793103   12178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:15:10.805738   12178 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.90s)
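
Note: the logs above also show minikube's recovery path: when the first createHost attempt fails, it deletes the half-created profile, waits five seconds ("Will try again in 5 seconds ..."), retries exactly once, and only then exits with GUEST_PROVISION (exit status 80). A simplified sketch of that flow (an illustrative reconstruction, not the actual minikube source):

	// retry.go - hypothetical sketch of the start/retry behaviour in the logs.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the QEMU launch, which fails here because
	// nothing is listening on /var/run/socket_vmnet.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				return
			}
		}
		// On success, provisioning of the guest would continue from here.
	}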

TestNetworkPlugins/group/bridge/Start (9.84s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.839931708s)

-- stdout --
	* [bridge-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-972000" primary control-plane node in "bridge-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:15:13.354960   12297 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:13.355125   12297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:13.355131   12297 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:13.355133   12297 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:13.355286   12297 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:13.356397   12297 out.go:352] Setting JSON to false
	I1205 11:15:13.374551   12297 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6285,"bootTime":1733419828,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:13.374624   12297 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:13.381396   12297 out.go:177] * [bridge-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:13.389601   12297 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:13.389680   12297 notify.go:220] Checking for updates...
	I1205 11:15:13.395508   12297 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:13.398584   12297 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:15:13.401505   12297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:15:13.404546   12297 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:15:13.407535   12297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:15:13.410810   12297 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:15:13.410888   12297 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:15:13.410938   12297 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:15:13.414512   12297 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:15:13.421559   12297 start.go:297] selected driver: qemu2
	I1205 11:15:13.421566   12297 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:15:13.421577   12297 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:15:13.423951   12297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:15:13.426549   12297 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:15:13.430671   12297 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:15:13.430689   12297 cni.go:84] Creating CNI manager for "bridge"
	I1205 11:15:13.430692   12297 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:15:13.430730   12297 start.go:340] cluster config:
	{Name:bridge-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:13.434949   12297 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:13.442515   12297 out.go:177] * Starting "bridge-972000" primary control-plane node in "bridge-972000" cluster
	I1205 11:15:13.446539   12297 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:15:13.446556   12297 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:15:13.446566   12297 cache.go:56] Caching tarball of preloaded images
	I1205 11:15:13.446646   12297 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:15:13.446651   12297 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:15:13.446708   12297 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/bridge-972000/config.json ...
	I1205 11:15:13.446718   12297 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/bridge-972000/config.json: {Name:mk57c2192c5accfbc772a1c518cae9f6ffa8da0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:15:13.447054   12297 start.go:360] acquireMachinesLock for bridge-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:13.447096   12297 start.go:364] duration metric: took 36.459µs to acquireMachinesLock for "bridge-972000"
	I1205 11:15:13.447106   12297 start.go:93] Provisioning new machine with config: &{Name:bridge-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:13.447142   12297 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:13.455541   12297 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:15:13.469877   12297 start.go:159] libmachine.API.Create for "bridge-972000" (driver="qemu2")
	I1205 11:15:13.469912   12297 client.go:168] LocalClient.Create starting
	I1205 11:15:13.469983   12297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:13.470020   12297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:13.470030   12297 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:13.470066   12297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:13.470096   12297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:13.470110   12297 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:13.470484   12297 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:13.628999   12297 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:13.771322   12297 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:13.771332   12297 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:13.771558   12297 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2
	I1205 11:15:13.781518   12297 main.go:141] libmachine: STDOUT: 
	I1205 11:15:13.781550   12297 main.go:141] libmachine: STDERR: 
	I1205 11:15:13.781616   12297 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2 +20000M
	I1205 11:15:13.790267   12297 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:13.790282   12297 main.go:141] libmachine: STDERR: 
	I1205 11:15:13.790296   12297 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2
	I1205 11:15:13.790300   12297 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:13.790312   12297 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:13.790352   12297 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:8b:08:ba:2d:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2
	I1205 11:15:13.792347   12297 main.go:141] libmachine: STDOUT: 
	I1205 11:15:13.792361   12297 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:13.792383   12297 client.go:171] duration metric: took 322.464292ms to LocalClient.Create
	I1205 11:15:15.794474   12297 start.go:128] duration metric: took 2.347318208s to createHost
	I1205 11:15:15.794496   12297 start.go:83] releasing machines lock for "bridge-972000", held for 2.347388875s
	W1205 11:15:15.794522   12297 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:15.809218   12297 out.go:177] * Deleting "bridge-972000" in qemu2 ...
	W1205 11:15:15.820731   12297 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:15.820742   12297 start.go:729] Will try again in 5 seconds ...
	I1205 11:15:20.822935   12297 start.go:360] acquireMachinesLock for bridge-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:20.823266   12297 start.go:364] duration metric: took 270µs to acquireMachinesLock for "bridge-972000"
	I1205 11:15:20.823305   12297 start.go:93] Provisioning new machine with config: &{Name:bridge-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:20.823423   12297 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:20.834144   12297 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:15:20.867216   12297 start.go:159] libmachine.API.Create for "bridge-972000" (driver="qemu2")
	I1205 11:15:20.867263   12297 client.go:168] LocalClient.Create starting
	I1205 11:15:20.867384   12297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:20.867442   12297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:20.867454   12297 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:20.867500   12297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:20.867544   12297 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:20.867553   12297 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:20.867971   12297 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:21.031480   12297 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:21.097253   12297 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:21.097260   12297 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:21.097486   12297 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2
	I1205 11:15:21.107764   12297 main.go:141] libmachine: STDOUT: 
	I1205 11:15:21.107798   12297 main.go:141] libmachine: STDERR: 
	I1205 11:15:21.107888   12297 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2 +20000M
	I1205 11:15:21.116535   12297 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:21.116560   12297 main.go:141] libmachine: STDERR: 
	I1205 11:15:21.116574   12297 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2
	I1205 11:15:21.116579   12297 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:21.116587   12297 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:21.116613   12297 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:22:d0:94:38:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/bridge-972000/disk.qcow2
	I1205 11:15:21.118486   12297 main.go:141] libmachine: STDOUT: 
	I1205 11:15:21.118507   12297 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:21.118519   12297 client.go:171] duration metric: took 251.251042ms to LocalClient.Create
	I1205 11:15:23.120850   12297 start.go:128] duration metric: took 2.297375334s to createHost
	I1205 11:15:23.120924   12297 start.go:83] releasing machines lock for "bridge-972000", held for 2.297636666s
	W1205 11:15:23.121262   12297 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:23.135867   12297 out.go:201] 
	W1205 11:15:23.139948   12297 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:15:23.139991   12297 out.go:270] * 
	* 
	W1205 11:15:23.142418   12297 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:15:23.150832   12297 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.84s)
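
The failure above, like every other failure in this group, reduces to one host-side fault: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client could not hand QEMU a connected file descriptor and the VM was never started. A minimal triage sketch, assuming the /opt/socket_vmnet layout this run uses (the gateway address below is illustrative, not taken from the log):

    # Is the socket present, and is a daemon listening on it?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet
    # If nothing is listening, run the daemon in the foreground as a quick test
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet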

TestNetworkPlugins/group/kubenet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-972000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.821700458s)

-- stdout --
	* [kubenet-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-972000" primary control-plane node in "kubenet-972000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-972000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:15:25.526740   12411 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:25.526906   12411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:25.526909   12411 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:25.526912   12411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:25.527058   12411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:25.528204   12411 out.go:352] Setting JSON to false
	I1205 11:15:25.546050   12411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6297,"bootTime":1733419828,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:25.546136   12411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:25.551841   12411 out.go:177] * [kubenet-972000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:25.559851   12411 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:25.559885   12411 notify.go:220] Checking for updates...
	I1205 11:15:25.567794   12411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:25.569366   12411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:15:25.572805   12411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:15:25.575801   12411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:15:25.578827   12411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:15:25.582222   12411 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:15:25.582301   12411 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:15:25.582349   12411 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:15:25.585779   12411 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:15:25.592780   12411 start.go:297] selected driver: qemu2
	I1205 11:15:25.592789   12411 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:15:25.592799   12411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:15:25.595249   12411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:15:25.598817   12411 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:15:25.601920   12411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:15:25.601942   12411 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1205 11:15:25.601985   12411 start.go:340] cluster config:
	{Name:kubenet-972000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:25.606580   12411 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:25.614835   12411 out.go:177] * Starting "kubenet-972000" primary control-plane node in "kubenet-972000" cluster
	I1205 11:15:25.618716   12411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:15:25.618728   12411 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:15:25.618737   12411 cache.go:56] Caching tarball of preloaded images
	I1205 11:15:25.618797   12411 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:15:25.618801   12411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:15:25.618849   12411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kubenet-972000/config.json ...
	I1205 11:15:25.618859   12411 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/kubenet-972000/config.json: {Name:mk3aeca48d5da561ee6e7fa555467af90d25e320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:15:25.619199   12411 start.go:360] acquireMachinesLock for kubenet-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:25.619241   12411 start.go:364] duration metric: took 36.792µs to acquireMachinesLock for "kubenet-972000"
	I1205 11:15:25.619251   12411 start.go:93] Provisioning new machine with config: &{Name:kubenet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:25.619275   12411 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:25.622882   12411 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:15:25.637439   12411 start.go:159] libmachine.API.Create for "kubenet-972000" (driver="qemu2")
	I1205 11:15:25.637469   12411 client.go:168] LocalClient.Create starting
	I1205 11:15:25.637544   12411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:25.637583   12411 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:25.637593   12411 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:25.637630   12411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:25.637660   12411 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:25.637670   12411 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:25.638085   12411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:25.796714   12411 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:25.883978   12411 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:25.883986   12411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:25.884467   12411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2
	I1205 11:15:25.894506   12411 main.go:141] libmachine: STDOUT: 
	I1205 11:15:25.894521   12411 main.go:141] libmachine: STDERR: 
	I1205 11:15:25.894579   12411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2 +20000M
	I1205 11:15:25.903589   12411 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:25.903603   12411 main.go:141] libmachine: STDERR: 
	I1205 11:15:25.903628   12411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2
	I1205 11:15:25.903632   12411 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:25.903646   12411 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:25.903679   12411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:5f:1a:1c:0a:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2
	I1205 11:15:25.905665   12411 main.go:141] libmachine: STDOUT: 
	I1205 11:15:25.905677   12411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:25.905693   12411 client.go:171] duration metric: took 268.218ms to LocalClient.Create
	I1205 11:15:27.907779   12411 start.go:128] duration metric: took 2.288488875s to createHost
	I1205 11:15:27.907811   12411 start.go:83] releasing machines lock for "kubenet-972000", held for 2.288560209s
	W1205 11:15:27.907822   12411 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:27.918302   12411 out.go:177] * Deleting "kubenet-972000" in qemu2 ...
	W1205 11:15:27.928429   12411 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:27.928436   12411 start.go:729] Will try again in 5 seconds ...
	I1205 11:15:32.930524   12411 start.go:360] acquireMachinesLock for kubenet-972000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:32.930660   12411 start.go:364] duration metric: took 112.458µs to acquireMachinesLock for "kubenet-972000"
	I1205 11:15:32.930689   12411 start.go:93] Provisioning new machine with config: &{Name:kubenet-972000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-972000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:32.930746   12411 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:32.940563   12411 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 11:15:32.955598   12411 start.go:159] libmachine.API.Create for "kubenet-972000" (driver="qemu2")
	I1205 11:15:32.955623   12411 client.go:168] LocalClient.Create starting
	I1205 11:15:32.955691   12411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:32.955742   12411 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:32.955755   12411 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:32.955797   12411 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:32.955825   12411 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:32.955833   12411 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:32.956199   12411 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:33.114489   12411 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:33.244080   12411 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:33.244088   12411 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:33.244305   12411 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2
	I1205 11:15:33.254531   12411 main.go:141] libmachine: STDOUT: 
	I1205 11:15:33.254555   12411 main.go:141] libmachine: STDERR: 
	I1205 11:15:33.254621   12411 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2 +20000M
	I1205 11:15:33.263196   12411 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:33.263211   12411 main.go:141] libmachine: STDERR: 
	I1205 11:15:33.263227   12411 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2
	I1205 11:15:33.263232   12411 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:33.263255   12411 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:33.263291   12411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:5d:4e:71:40:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/kubenet-972000/disk.qcow2
	I1205 11:15:33.265142   12411 main.go:141] libmachine: STDOUT: 
	I1205 11:15:33.265157   12411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:33.265171   12411 client.go:171] duration metric: took 309.543542ms to LocalClient.Create
	I1205 11:15:35.267385   12411 start.go:128] duration metric: took 2.336601792s to createHost
	I1205 11:15:35.267506   12411 start.go:83] releasing machines lock for "kubenet-972000", held for 2.336829625s
	W1205 11:15:35.268040   12411 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-972000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:35.282674   12411 out.go:201] 
	W1205 11:15:35.285828   12411 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:15:35.285897   12411 out.go:270] * 
	* 
	W1205 11:15:35.288329   12411 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:15:35.298675   12411 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.82s)
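
The identical connection-refused error across the bridge and kubenet profiles (and the rest of this group) points at the shared host daemon rather than anything profile-specific, so restarting it under launchd is the usual remedy. The sketch below assumes a Homebrew-managed install; this host's /opt/socket_vmnet build would instead need its own launchd job reloaded:

    # Assumption: socket_vmnet installed via Homebrew and managed as a root service
    sudo brew services restart socket_vmnet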

TestStartStop/group/old-k8s-version/serial/FirstStart (10s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-811000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-811000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.920713375s)

-- stdout --
	* [old-k8s-version-811000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-811000" primary control-plane node in "old-k8s-version-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:15:37.687351   12524 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:37.687519   12524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:37.687522   12524 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:37.687524   12524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:37.687665   12524 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:37.688898   12524 out.go:352] Setting JSON to false
	I1205 11:15:37.707145   12524 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6309,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:37.707207   12524 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:37.712830   12524 out.go:177] * [old-k8s-version-811000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:37.720799   12524 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:37.720871   12524 notify.go:220] Checking for updates...
	I1205 11:15:37.728769   12524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:37.731729   12524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:15:37.735747   12524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:15:37.738794   12524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:15:37.741746   12524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:15:37.745176   12524 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:15:37.745264   12524 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:15:37.745317   12524 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:15:37.749743   12524 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:15:37.756736   12524 start.go:297] selected driver: qemu2
	I1205 11:15:37.756744   12524 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:15:37.756758   12524 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:15:37.759380   12524 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:15:37.763733   12524 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:15:37.766765   12524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:15:37.766781   12524 cni.go:84] Creating CNI manager for ""
	I1205 11:15:37.766800   12524 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:15:37.766844   12524 start.go:340] cluster config:
	{Name:old-k8s-version-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:37.771132   12524 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:37.779737   12524 out.go:177] * Starting "old-k8s-version-811000" primary control-plane node in "old-k8s-version-811000" cluster
	I1205 11:15:37.783766   12524 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:15:37.783778   12524 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:15:37.783787   12524 cache.go:56] Caching tarball of preloaded images
	I1205 11:15:37.783854   12524 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:15:37.783859   12524 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:15:37.783906   12524 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/old-k8s-version-811000/config.json ...
	I1205 11:15:37.783916   12524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/old-k8s-version-811000/config.json: {Name:mk547a716dc9aa039b92fd05ddac2eb7d6e8e2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:15:37.784373   12524 start.go:360] acquireMachinesLock for old-k8s-version-811000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:37.784422   12524 start.go:364] duration metric: took 40.459µs to acquireMachinesLock for "old-k8s-version-811000"
	I1205 11:15:37.784433   12524 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:37.784459   12524 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:37.788612   12524 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:15:37.803334   12524 start.go:159] libmachine.API.Create for "old-k8s-version-811000" (driver="qemu2")
	I1205 11:15:37.803360   12524 client.go:168] LocalClient.Create starting
	I1205 11:15:37.803429   12524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:37.803465   12524 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:37.803475   12524 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:37.803516   12524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:37.803545   12524 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:37.803553   12524 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:37.803918   12524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:37.986826   12524 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:38.090256   12524 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:38.090271   12524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:38.090471   12524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:38.100419   12524 main.go:141] libmachine: STDOUT: 
	I1205 11:15:38.100436   12524 main.go:141] libmachine: STDERR: 
	I1205 11:15:38.100492   12524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2 +20000M
	I1205 11:15:38.109097   12524 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:38.109115   12524 main.go:141] libmachine: STDERR: 
	I1205 11:15:38.109130   12524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:38.109144   12524 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:38.109154   12524 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:38.109190   12524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:f4:e8:9e:90:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:38.111134   12524 main.go:141] libmachine: STDOUT: 
	I1205 11:15:38.111146   12524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:38.111166   12524 client.go:171] duration metric: took 307.7985ms to LocalClient.Create
	I1205 11:15:40.112889   12524 start.go:128] duration metric: took 2.328410958s to createHost
	I1205 11:15:40.112926   12524 start.go:83] releasing machines lock for "old-k8s-version-811000", held for 2.328492s
	W1205 11:15:40.112960   12524 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:40.127983   12524 out.go:177] * Deleting "old-k8s-version-811000" in qemu2 ...
	W1205 11:15:40.147285   12524 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:40.147413   12524 start.go:729] Will try again in 5 seconds ...
	I1205 11:15:45.149626   12524 start.go:360] acquireMachinesLock for old-k8s-version-811000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:45.150231   12524 start.go:364] duration metric: took 523.416µs to acquireMachinesLock for "old-k8s-version-811000"
	I1205 11:15:45.150362   12524 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:45.150661   12524 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:45.159291   12524 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:15:45.198042   12524 start.go:159] libmachine.API.Create for "old-k8s-version-811000" (driver="qemu2")
	I1205 11:15:45.198093   12524 client.go:168] LocalClient.Create starting
	I1205 11:15:45.198260   12524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:45.198348   12524 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:45.198364   12524 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:45.198441   12524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:45.198493   12524 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:45.198503   12524 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:45.199057   12524 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:45.365546   12524 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:45.505320   12524 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:45.505332   12524 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:45.505580   12524 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:45.515880   12524 main.go:141] libmachine: STDOUT: 
	I1205 11:15:45.515895   12524 main.go:141] libmachine: STDERR: 
	I1205 11:15:45.515964   12524 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2 +20000M
	I1205 11:15:45.524754   12524 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:45.524769   12524 main.go:141] libmachine: STDERR: 
	I1205 11:15:45.524780   12524 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:45.524785   12524 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:45.524795   12524 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:45.524839   12524 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b1:e9:95:80:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:45.526796   12524 main.go:141] libmachine: STDOUT: 
	I1205 11:15:45.526810   12524 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:45.526821   12524 client.go:171] duration metric: took 328.718416ms to LocalClient.Create
	I1205 11:15:47.529037   12524 start.go:128] duration metric: took 2.378329458s to createHost
	I1205 11:15:47.529117   12524 start.go:83] releasing machines lock for "old-k8s-version-811000", held for 2.3788565s
	W1205 11:15:47.529593   12524 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:47.544250   12524 out.go:201] 
	W1205 11:15:47.547334   12524 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:15:47.547363   12524 out.go:270] * 
	W1205 11:15:47.549943   12524 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:15:47.562277   12524 out.go:201] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-811000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (72.261125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.00s)
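
All of the old-k8s-version failures below trace back to the single host-side fault captured above: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client can never hand qemu-system-aarch64 the file descriptor for its -netdev socket,fd=3. A minimal sketch for checking the daemon on the build host, assuming the manual /opt/socket_vmnet layout implied by the client path in the log (the socket_vmnet binary path and foreground invocation are taken from the upstream README, not from this run):

    # is the daemon socket present at the path minikube is dialing?
    $ ls -l /var/run/socket_vmnet
    # run the daemon in the foreground to test (binary path assumed; gateway address is an example)
    $ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the socket file exists but connections are still refused, the daemon process has likely died while its stale socket was left behind.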

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-811000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-811000 create -f testdata/busybox.yaml: exit status 1 (30.439ms)
** stderr ** 
	error: context "old-k8s-version-811000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-811000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (34.373916ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (33.002208ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
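
The kubectl failure here is purely a knock-on effect: FirstStart exited before a cluster existed, so minikube never wrote an old-k8s-version-811000 entry into the kubeconfig. That can be confirmed with stock kubectl against the kubeconfig path shown earlier in the log:

    $ kubectl --kubeconfig=/Users/jenkins/minikube-integration/20052-8600/kubeconfig config get-contexts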

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-811000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-811000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-811000 describe deploy/metrics-server -n kube-system: exit status 1 (27.37425ms)
** stderr ** 
	error: context "old-k8s-version-811000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-811000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (34.018042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
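
Note the asymmetry above: `addons enable` is an out-of-cluster minikube operation and exits 0 even with the host Stopped, while the follow-up `kubectl describe` needs a live apiserver and fails. On a healthy cluster the registry override the test asserts on could be checked directly; a hedged example (the jsonpath expression is illustrative, not part of this run):

    $ kubectl --context old-k8s-version-811000 -n kube-system \
        get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to print: fake.domain/registry.k8s.io/echoserver:1.4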

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-811000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-811000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.207455917s)
-- stdout --
	* [old-k8s-version-811000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-811000" primary control-plane node in "old-k8s-version-811000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-811000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-811000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1205 11:15:51.060851   12581 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:51.061016   12581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:51.061020   12581 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:51.061023   12581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:51.061132   12581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:51.062193   12581 out.go:352] Setting JSON to false
	I1205 11:15:51.080254   12581 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6323,"bootTime":1733419828,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:51.080318   12581 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:51.085457   12581 out.go:177] * [old-k8s-version-811000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:51.092392   12581 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:51.092429   12581 notify.go:220] Checking for updates...
	I1205 11:15:51.100435   12581 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:51.104342   12581 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:15:51.107438   12581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:15:51.114369   12581 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:15:51.123415   12581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:15:51.127775   12581 config.go:182] Loaded profile config "old-k8s-version-811000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1205 11:15:51.132426   12581 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 11:15:51.136407   12581 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:15:51.141377   12581 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:15:51.149385   12581 start.go:297] selected driver: qemu2
	I1205 11:15:51.149395   12581 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:51.149464   12581 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:15:51.152298   12581 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:15:51.152331   12581 cni.go:84] Creating CNI manager for ""
	I1205 11:15:51.152351   12581 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 11:15:51.152378   12581 start.go:340] cluster config:
	{Name:old-k8s-version-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-811000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:51.157057   12581 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:51.164400   12581 out.go:177] * Starting "old-k8s-version-811000" primary control-plane node in "old-k8s-version-811000" cluster
	I1205 11:15:51.168448   12581 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 11:15:51.168481   12581 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 11:15:51.168495   12581 cache.go:56] Caching tarball of preloaded images
	I1205 11:15:51.168612   12581 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:15:51.168619   12581 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 11:15:51.168692   12581 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/old-k8s-version-811000/config.json ...
	I1205 11:15:51.169190   12581 start.go:360] acquireMachinesLock for old-k8s-version-811000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:51.169224   12581 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "old-k8s-version-811000"
	I1205 11:15:51.169232   12581 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:15:51.169238   12581 fix.go:54] fixHost starting: 
	I1205 11:15:51.169368   12581 fix.go:112] recreateIfNeeded on old-k8s-version-811000: state=Stopped err=<nil>
	W1205 11:15:51.169379   12581 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:15:51.173377   12581 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-811000" ...
	I1205 11:15:51.181420   12581 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:51.181459   12581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b1:e9:95:80:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:51.183704   12581 main.go:141] libmachine: STDOUT: 
	I1205 11:15:51.183719   12581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:51.183748   12581 fix.go:56] duration metric: took 14.510292ms for fixHost
	I1205 11:15:51.183754   12581 start.go:83] releasing machines lock for "old-k8s-version-811000", held for 14.525958ms
	W1205 11:15:51.183759   12581 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:15:51.183808   12581 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:51.183812   12581 start.go:729] Will try again in 5 seconds ...
	I1205 11:15:56.184815   12581 start.go:360] acquireMachinesLock for old-k8s-version-811000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:56.185141   12581 start.go:364] duration metric: took 266.583µs to acquireMachinesLock for "old-k8s-version-811000"
	I1205 11:15:56.185187   12581 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:15:56.185195   12581 fix.go:54] fixHost starting: 
	I1205 11:15:56.185638   12581 fix.go:112] recreateIfNeeded on old-k8s-version-811000: state=Stopped err=<nil>
	W1205 11:15:56.185652   12581 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:15:56.194928   12581 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-811000" ...
	I1205 11:15:56.199036   12581 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:56.199171   12581 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b1:e9:95:80:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/old-k8s-version-811000/disk.qcow2
	I1205 11:15:56.206237   12581 main.go:141] libmachine: STDOUT: 
	I1205 11:15:56.206291   12581 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:56.206354   12581 fix.go:56] duration metric: took 21.157042ms for fixHost
	I1205 11:15:56.206369   12581 start.go:83] releasing machines lock for "old-k8s-version-811000", held for 21.212542ms
	W1205 11:15:56.206505   12581 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-811000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:56.212964   12581 out.go:201] 
	W1205 11:15:56.217321   12581 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:15:56.217343   12581 out.go:270] * 
	W1205 11:15:56.218869   12581 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:15:56.227977   12581 out.go:201] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-811000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (48.804708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
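
The restart path is visible in the trace: SecondStart reuses the existing profile ("Skipping create...Using existing machine configuration"), fails in fixHost, waits 5 seconds, retries once, and then exits with GUEST_PROVISION. The recovery minikube itself prints is a delete-and-recreate, sketched below; it cannot succeed until the socket_vmnet daemon is restored:

    $ out/minikube-darwin-arm64 delete -p old-k8s-version-811000
    $ out/minikube-darwin-arm64 start -p old-k8s-version-811000 --driver=qemu2 --kubernetes-version=v1.20.0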

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-811000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (33.690375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-811000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-811000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-811000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.147125ms)
** stderr ** 
	error: context "old-k8s-version-811000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-811000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (34.364708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-811000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
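The diff above is in want/got form: every "-" line is an expected v1.20.0 image missing from the got set, which is empty because `image list` ran against a VM that never booted. The expected set (minus storage-provisioner, which is minikube's own addition) can be cross-checked with kubeadm, assuming a kubeadm binary is available; this command is not part of the test run:

    $ kubeadm config images list --kubernetes-version v1.20.0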
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (34.138791ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-811000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-811000 --alsologtostderr -v=1: exit status 83 (48.070791ms)
-- stdout --
	* The control-plane node old-k8s-version-811000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-811000"
-- /stdout --
** stderr ** 
	I1205 11:15:56.495135   12600 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:56.496242   12600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:56.496249   12600 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:56.496251   12600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:56.496441   12600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:56.496661   12600 out.go:352] Setting JSON to false
	I1205 11:15:56.496669   12600 mustload.go:65] Loading cluster: old-k8s-version-811000
	I1205 11:15:56.496872   12600 config.go:182] Loaded profile config "old-k8s-version-811000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1205 11:15:56.501418   12600 out.go:177] * The control-plane node old-k8s-version-811000 host is not running: state=Stopped
	I1205 11:15:56.505463   12600 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-811000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-811000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (32.266708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (34.188458ms)
-- stdout --
	Stopped

helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-811000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-842000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-842000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.823827708s)
-- stdout --
	* [no-preload-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-842000" primary control-plane node in "no-preload-842000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-842000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1205 11:15:56.839953   12617 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:56.840130   12617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:56.840136   12617 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:56.840138   12617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:56.840269   12617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:56.841499   12617 out.go:352] Setting JSON to false
	I1205 11:15:56.859513   12617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6328,"bootTime":1733419828,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:56.859598   12617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:56.863657   12617 out.go:177] * [no-preload-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:56.870848   12617 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:56.870929   12617 notify.go:220] Checking for updates...
	I1205 11:15:56.879855   12617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:56.882855   12617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:15:56.886800   12617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:15:56.889777   12617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:15:56.892828   12617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:15:56.896082   12617 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:15:56.896163   12617 config.go:182] Loaded profile config "stopped-upgrade-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1205 11:15:56.896212   12617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:15:56.900809   12617 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:15:56.906734   12617 start.go:297] selected driver: qemu2
	I1205 11:15:56.906740   12617 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:15:56.906750   12617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:15:56.909197   12617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:15:56.911768   12617 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:15:56.914924   12617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:15:56.914945   12617 cni.go:84] Creating CNI manager for ""
	I1205 11:15:56.914971   12617 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:15:56.914974   12617 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:15:56.915017   12617 start.go:340] cluster config:
	{Name:no-preload-842000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:15:56.919291   12617 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.927797   12617 out.go:177] * Starting "no-preload-842000" primary control-plane node in "no-preload-842000" cluster
	I1205 11:15:56.931862   12617 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:15:56.931920   12617 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/no-preload-842000/config.json ...
	I1205 11:15:56.931934   12617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/no-preload-842000/config.json: {Name:mk3faaddfb55f2896fb24ecc759231cd6b8f6d1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:15:56.931953   12617 cache.go:107] acquiring lock: {Name:mk25e8524c7a11929b56e532b1f7fd5a0db79d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931967   12617 cache.go:107] acquiring lock: {Name:mk0be87460cb9f59b87d8d68a640070dfa12d90e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931953   12617 cache.go:107] acquiring lock: {Name:mkd330a138939dbc9a018231a2fd94f19abb61f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931978   12617 cache.go:107] acquiring lock: {Name:mk0b2d3cf9074aedb5496164de6ba903b278f426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931962   12617 cache.go:107] acquiring lock: {Name:mk52ed2701d6b0872ead66e653126940a186727c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931986   12617 cache.go:107] acquiring lock: {Name:mk91fc7e3814b12eb390e034f12a79ffeaea72c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931986   12617 cache.go:107] acquiring lock: {Name:mkd40ccf97204623b134da4591582d43ed55dd19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.931984   12617 cache.go:107] acquiring lock: {Name:mkd7a78059f2125eef4449e7722460653af31b0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:15:56.932052   12617 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 11:15:56.932060   12617 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.875µs
	I1205 11:15:56.932066   12617 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 11:15:56.932199   12617 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 11:15:56.932321   12617 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 11:15:56.932331   12617 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 11:15:56.932422   12617 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 11:15:56.932497   12617 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 11:15:56.932528   12617 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 11:15:56.932542   12617 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 11:15:56.932500   12617 start.go:360] acquireMachinesLock for no-preload-842000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:15:56.932602   12617 start.go:364] duration metric: took 36µs to acquireMachinesLock for "no-preload-842000"
	I1205 11:15:56.932615   12617 start.go:93] Provisioning new machine with config: &{Name:no-preload-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:no-preload-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:15:56.932657   12617 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:15:56.939837   12617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:15:56.943807   12617 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 11:15:56.943938   12617 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 11:15:56.945445   12617 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 11:15:56.945436   12617 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 11:15:56.947387   12617 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 11:15:56.947411   12617 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 11:15:56.947420   12617 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 11:15:56.956032   12617 start.go:159] libmachine.API.Create for "no-preload-842000" (driver="qemu2")
	I1205 11:15:56.956051   12617 client.go:168] LocalClient.Create starting
	I1205 11:15:56.956141   12617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:15:56.956177   12617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:56.956188   12617 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:56.956225   12617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:15:56.956254   12617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:15:56.956265   12617 main.go:141] libmachine: Parsing certificate...
	I1205 11:15:56.956630   12617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:15:57.120807   12617 main.go:141] libmachine: Creating SSH key...
	I1205 11:15:57.199598   12617 main.go:141] libmachine: Creating Disk image...
	I1205 11:15:57.199686   12617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:15:57.199929   12617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:15:57.209854   12617 main.go:141] libmachine: STDOUT: 
	I1205 11:15:57.209872   12617 main.go:141] libmachine: STDERR: 
	I1205 11:15:57.209932   12617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2 +20000M
	I1205 11:15:57.219279   12617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:15:57.219295   12617 main.go:141] libmachine: STDERR: 
	I1205 11:15:57.219312   12617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:15:57.219317   12617 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:15:57.219332   12617 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:15:57.219364   12617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:4f:56:be:5a:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:15:57.221493   12617 main.go:141] libmachine: STDOUT: 
	I1205 11:15:57.221509   12617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:15:57.221536   12617 client.go:171] duration metric: took 265.478292ms to LocalClient.Create
	I1205 11:15:57.339997   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1205 11:15:57.400777   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 11:15:57.424880   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 11:15:57.494584   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 11:15:57.528653   12617 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1205 11:15:57.528664   12617 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 596.685541ms
	I1205 11:15:57.528671   12617 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1205 11:15:57.550389   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 11:15:57.594772   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1205 11:15:57.675272   12617 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 11:15:59.221676   12617 start.go:128] duration metric: took 2.28899675s to createHost
	I1205 11:15:59.221712   12617 start.go:83] releasing machines lock for "no-preload-842000", held for 2.289099291s
	W1205 11:15:59.221745   12617 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:59.229979   12617 out.go:177] * Deleting "no-preload-842000" in qemu2 ...
	W1205 11:15:59.253340   12617 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:15:59.253353   12617 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:01.137974   12617 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1205 11:16:01.138068   12617 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 4.206082708s
	I1205 11:16:01.138117   12617 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1205 11:16:01.257981   12617 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1205 11:16:01.258027   12617 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 4.326032083s
	I1205 11:16:01.258049   12617 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1205 11:16:02.257949   12617 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1205 11:16:02.258020   12617 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 5.326014167s
	I1205 11:16:02.258045   12617 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1205 11:16:02.778527   12617 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1205 11:16:02.778575   12617 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 5.846608959s
	I1205 11:16:02.778600   12617 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1205 11:16:03.051408   12617 cache.go:157] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1205 11:16:03.051490   12617 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 6.119517375s
	I1205 11:16:03.051521   12617 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1205 11:16:04.254742   12617 start.go:360] acquireMachinesLock for no-preload-842000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:04.255454   12617 start.go:364] duration metric: took 597.166µs to acquireMachinesLock for "no-preload-842000"
	I1205 11:16:04.255664   12617 start.go:93] Provisioning new machine with config: &{Name:no-preload-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:04.255998   12617 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:04.266825   12617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:04.316928   12617 start.go:159] libmachine.API.Create for "no-preload-842000" (driver="qemu2")
	I1205 11:16:04.316984   12617 client.go:168] LocalClient.Create starting
	I1205 11:16:04.317134   12617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:04.317241   12617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:04.317264   12617 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:04.317348   12617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:04.317404   12617 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:04.317423   12617 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:04.318020   12617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:04.487757   12617 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:04.561198   12617 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:04.561205   12617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:04.561410   12617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:16:04.571472   12617 main.go:141] libmachine: STDOUT: 
	I1205 11:16:04.571488   12617 main.go:141] libmachine: STDERR: 
	I1205 11:16:04.571562   12617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2 +20000M
	I1205 11:16:04.580266   12617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:04.580305   12617 main.go:141] libmachine: STDERR: 
	I1205 11:16:04.580327   12617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:16:04.580331   12617 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:04.580345   12617 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:04.580387   12617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:88:c5:e8:81:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:16:04.582406   12617 main.go:141] libmachine: STDOUT: 
	I1205 11:16:04.582420   12617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:04.582437   12617 client.go:171] duration metric: took 265.447834ms to LocalClient.Create
	I1205 11:16:06.582759   12617 start.go:128] duration metric: took 2.326672333s to createHost
	I1205 11:16:06.582832   12617 start.go:83] releasing machines lock for "no-preload-842000", held for 2.327326416s
	W1205 11:16:06.583147   12617 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-842000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:06.593694   12617 out.go:201] 
	W1205 11:16:06.601886   12617 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:06.601991   12617 out.go:270] * 
	* 
	W1205 11:16:06.604803   12617 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:06.614673   12617 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-842000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (70.765416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.90s)

TestStartStop/group/embed-certs/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.840778834s)

-- stdout --
	* [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-089000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:15:59.940370   12658 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:15:59.940526   12658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:59.940529   12658 out.go:358] Setting ErrFile to fd 2...
	I1205 11:15:59.940532   12658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:15:59.940675   12658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:15:59.941973   12658 out.go:352] Setting JSON to false
	I1205 11:15:59.960565   12658 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6331,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:15:59.960659   12658 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:15:59.965642   12658 out.go:177] * [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:15:59.973583   12658 notify.go:220] Checking for updates...
	I1205 11:15:59.978648   12658 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:15:59.986546   12658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:15:59.994549   12658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:00.002560   12658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:00.006563   12658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:00.014564   12658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:00.018978   12658 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:00.019057   12658 config.go:182] Loaded profile config "no-preload-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:00.019112   12658 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:00.023599   12658 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:16:00.030518   12658 start.go:297] selected driver: qemu2
	I1205 11:16:00.030523   12658 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:16:00.030532   12658 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:00.033370   12658 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:16:00.036624   12658 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:16:00.040680   12658 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:16:00.040715   12658 cni.go:84] Creating CNI manager for ""
	I1205 11:16:00.040744   12658 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:00.040756   12658 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:16:00.040789   12658 start.go:340] cluster config:
	{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:00.045917   12658 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:00.053537   12658 out.go:177] * Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	I1205 11:16:00.056587   12658 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:00.056611   12658 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:16:00.056626   12658 cache.go:56] Caching tarball of preloaded images
	I1205 11:16:00.056718   12658 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:16:00.056729   12658 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:16:00.056798   12658 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/embed-certs-089000/config.json ...
	I1205 11:16:00.056810   12658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/embed-certs-089000/config.json: {Name:mkc0f3708692e011dc78c947384ff490e8139574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:16:00.057126   12658 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:00.057182   12658 start.go:364] duration metric: took 49µs to acquireMachinesLock for "embed-certs-089000"
	I1205 11:16:00.057198   12658 start.go:93] Provisioning new machine with config: &{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:00.057230   12658 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:00.065518   12658 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:00.083694   12658 start.go:159] libmachine.API.Create for "embed-certs-089000" (driver="qemu2")
	I1205 11:16:00.083729   12658 client.go:168] LocalClient.Create starting
	I1205 11:16:00.083812   12658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:00.083852   12658 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:00.083863   12658 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:00.083918   12658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:00.083950   12658 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:00.083959   12658 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:00.084375   12658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:00.244021   12658 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:00.292347   12658 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:00.292354   12658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:00.292581   12658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:00.302502   12658 main.go:141] libmachine: STDOUT: 
	I1205 11:16:00.302519   12658 main.go:141] libmachine: STDERR: 
	I1205 11:16:00.302574   12658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2 +20000M
	I1205 11:16:00.311472   12658 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:00.311489   12658 main.go:141] libmachine: STDERR: 
	I1205 11:16:00.311505   12658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:00.311511   12658 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:00.311525   12658 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:00.311553   12658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:58:b5:bb:48:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:00.313409   12658 main.go:141] libmachine: STDOUT: 
	I1205 11:16:00.313423   12658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:00.313453   12658 client.go:171] duration metric: took 229.717541ms to LocalClient.Create
	I1205 11:16:02.315660   12658 start.go:128] duration metric: took 2.258400833s to createHost
	I1205 11:16:02.315741   12658 start.go:83] releasing machines lock for "embed-certs-089000", held for 2.258543833s
	W1205 11:16:02.315823   12658 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:02.325953   12658 out.go:177] * Deleting "embed-certs-089000" in qemu2 ...
	W1205 11:16:02.353777   12658 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:02.353810   12658 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:07.355869   12658 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:07.356209   12658 start.go:364] duration metric: took 278.334µs to acquireMachinesLock for "embed-certs-089000"
	I1205 11:16:07.356338   12658 start.go:93] Provisioning new machine with config: &{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:07.356631   12658 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:07.362308   12658 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:07.403043   12658 start.go:159] libmachine.API.Create for "embed-certs-089000" (driver="qemu2")
	I1205 11:16:07.403102   12658 client.go:168] LocalClient.Create starting
	I1205 11:16:07.403211   12658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:07.403266   12658 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:07.403284   12658 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:07.403350   12658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:07.403382   12658 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:07.403394   12658 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:07.404028   12658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:07.595105   12658 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:07.677992   12658 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:07.677999   12658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:07.678200   12658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:07.688314   12658 main.go:141] libmachine: STDOUT: 
	I1205 11:16:07.688332   12658 main.go:141] libmachine: STDERR: 
	I1205 11:16:07.688401   12658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2 +20000M
	I1205 11:16:07.696951   12658 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:07.696965   12658 main.go:141] libmachine: STDERR: 
	I1205 11:16:07.696979   12658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:07.696983   12658 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:07.696992   12658 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:07.697031   12658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:7a:17:2b:88:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:07.698862   12658 main.go:141] libmachine: STDOUT: 
	I1205 11:16:07.698874   12658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:07.698886   12658 client.go:171] duration metric: took 295.776459ms to LocalClient.Create
	I1205 11:16:09.701103   12658 start.go:128] duration metric: took 2.344422041s to createHost
	I1205 11:16:09.701214   12658 start.go:83] releasing machines lock for "embed-certs-089000", held for 2.344976292s
	W1205 11:16:09.701686   12658 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:09.715487   12658 out.go:201] 
	W1205 11:16:09.720593   12658 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:09.720636   12658 out.go:270] * 
	* 
	W1205 11:16:09.723363   12658 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:09.732416   12658 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (68.354875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.91s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-842000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-842000 create -f testdata/busybox.yaml: exit status 1 (29.94225ms)

** stderr ** 
	error: context "no-preload-842000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-842000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (33.388083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (32.787ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-842000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-842000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-842000 describe deploy/metrics-server -n kube-system: exit status 1 (27.281458ms)

** stderr ** 
	error: context "no-preload-842000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-842000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (33.719292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-089000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-089000 create -f testdata/busybox.yaml: exit status 1 (29.334833ms)

** stderr ** 
	error: context "embed-certs-089000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-089000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (33.467459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (33.391375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-089000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-089000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-089000 describe deploy/metrics-server -n kube-system: exit status 1 (27.972666ms)

** stderr ** 
	error: context "embed-certs-089000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-089000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (32.674542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-842000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-842000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.194512792s)

-- stdout --
	* [no-preload-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-842000" primary control-plane node in "no-preload-842000" cluster
	* Restarting existing qemu2 VM for "no-preload-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-842000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:16:10.656162   12728 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:10.656314   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:10.656317   12728 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:10.656320   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:10.656454   12728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:10.657568   12728 out.go:352] Setting JSON to false
	I1205 11:16:10.675305   12728 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6342,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:16:10.675381   12728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:16:10.680203   12728 out.go:177] * [no-preload-842000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:16:10.687252   12728 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:16:10.687293   12728 notify.go:220] Checking for updates...
	I1205 11:16:10.694151   12728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:16:10.697101   12728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:10.700180   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:10.703158   12728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:10.704525   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:10.707477   12728 config.go:182] Loaded profile config "no-preload-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:10.707749   12728 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:10.711224   12728 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:16:10.716209   12728 start.go:297] selected driver: qemu2
	I1205 11:16:10.716218   12728 start.go:901] validating driver "qemu2" against &{Name:no-preload-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:10.716276   12728 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:10.718733   12728 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:16:10.718757   12728 cni.go:84] Creating CNI manager for ""
	I1205 11:16:10.718775   12728 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:10.718805   12728 start.go:340] cluster config:
	{Name:no-preload-842000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-842000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:10.723120   12728 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.731102   12728 out.go:177] * Starting "no-preload-842000" primary control-plane node in "no-preload-842000" cluster
	I1205 11:16:10.735121   12728 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:10.735179   12728 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/no-preload-842000/config.json ...
	I1205 11:16:10.735209   12728 cache.go:107] acquiring lock: {Name:mk25e8524c7a11929b56e532b1f7fd5a0db79d9b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735220   12728 cache.go:107] acquiring lock: {Name:mk0b2d3cf9074aedb5496164de6ba903b278f426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735291   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1205 11:16:10.735298   12728 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.625µs
	I1205 11:16:10.735303   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1205 11:16:10.735310   12728 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 95.542µs
	I1205 11:16:10.735314   12728 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1205 11:16:10.735306   12728 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1205 11:16:10.735326   12728 cache.go:107] acquiring lock: {Name:mkd7a78059f2125eef4449e7722460653af31b0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735338   12728 cache.go:107] acquiring lock: {Name:mk0be87460cb9f59b87d8d68a640070dfa12d90e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735363   12728 cache.go:107] acquiring lock: {Name:mk91fc7e3814b12eb390e034f12a79ffeaea72c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735354   12728 cache.go:107] acquiring lock: {Name:mkd40ccf97204623b134da4591582d43ed55dd19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735368   12728 cache.go:107] acquiring lock: {Name:mk52ed2701d6b0872ead66e653126940a186727c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735402   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1205 11:16:10.735406   12728 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 81.208µs
	I1205 11:16:10.735414   12728 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1205 11:16:10.735392   12728 cache.go:107] acquiring lock: {Name:mkd330a138939dbc9a018231a2fd94f19abb61f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:10.735476   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1205 11:16:10.735487   12728 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 149.125µs
	I1205 11:16:10.735493   12728 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1205 11:16:10.735564   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1205 11:16:10.735573   12728 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 259.208µs
	I1205 11:16:10.735577   12728 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1205 11:16:10.735578   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1205 11:16:10.735582   12728 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 211µs
	I1205 11:16:10.735588   12728 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1205 11:16:10.735632   12728 cache.go:115] /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1205 11:16:10.735637   12728 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 345.667µs
	I1205 11:16:10.735642   12728 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1205 11:16:10.735650   12728 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 11:16:10.735702   12728 start.go:360] acquireMachinesLock for no-preload-842000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:10.735737   12728 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "no-preload-842000"
	I1205 11:16:10.735746   12728 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:10.735752   12728 fix.go:54] fixHost starting: 
	I1205 11:16:10.735871   12728 fix.go:112] recreateIfNeeded on no-preload-842000: state=Stopped err=<nil>
	W1205 11:16:10.735877   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:10.744132   12728 out.go:177] * Restarting existing qemu2 VM for "no-preload-842000" ...
	I1205 11:16:10.748147   12728 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:10.748201   12728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:88:c5:e8:81:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:16:10.748380   12728 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 11:16:10.750602   12728 main.go:141] libmachine: STDOUT: 
	I1205 11:16:10.750628   12728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:10.750656   12728 fix.go:56] duration metric: took 14.904667ms for fixHost
	I1205 11:16:10.750660   12728 start.go:83] releasing machines lock for "no-preload-842000", held for 14.9185ms
	W1205 11:16:10.750667   12728 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:10.750708   12728 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:10.750713   12728 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:11.143864   12728 cache.go:162] opening:  /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1205 11:16:15.751417   12728 start.go:360] acquireMachinesLock for no-preload-842000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:15.751805   12728 start.go:364] duration metric: took 318.916µs to acquireMachinesLock for "no-preload-842000"
	I1205 11:16:15.751942   12728 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:15.751964   12728 fix.go:54] fixHost starting: 
	I1205 11:16:15.752644   12728 fix.go:112] recreateIfNeeded on no-preload-842000: state=Stopped err=<nil>
	W1205 11:16:15.752675   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:15.762205   12728 out.go:177] * Restarting existing qemu2 VM for "no-preload-842000" ...
	I1205 11:16:15.767136   12728 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:15.767330   12728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:88:c5:e8:81:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/no-preload-842000/disk.qcow2
	I1205 11:16:15.778712   12728 main.go:141] libmachine: STDOUT: 
	I1205 11:16:15.778783   12728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:15.778859   12728 fix.go:56] duration metric: took 26.89375ms for fixHost
	I1205 11:16:15.778876   12728 start.go:83] releasing machines lock for "no-preload-842000", held for 27.050125ms
	W1205 11:16:15.779063   12728 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-842000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:15.788162   12728 out.go:201] 
	W1205 11:16:15.792324   12728 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:15.792385   12728 out.go:270] * 
	* 
	W1205 11:16:15.795258   12728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:15.803189   12728 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-842000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (72.298417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
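Note: every restart attempt above fails at the same step: socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal, hypothetical Go probe (not part of the test suite; the socket path is taken from the log above, the 2-second timeout is an assumption) that separates a dead socket_vmnet daemon from a QEMU-side problem:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the driver failure above and
			// points at the socket_vmnet daemon, not at minikube or QEMU.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Run on the CI host, a refusal from this probe would explain why each "Restarting existing qemu2 VM" attempt fails within milliseconds.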

TestStartStop/group/embed-certs/serial/SecondStart (5.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.2248285s)

-- stdout --
	* [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	* Restarting existing qemu2 VM for "embed-certs-089000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-089000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:16:14.152566   12758 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:14.152729   12758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:14.152732   12758 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:14.152734   12758 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:14.152874   12758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:14.153959   12758 out.go:352] Setting JSON to false
	I1205 11:16:14.171560   12758 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6346,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:16:14.171624   12758 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:16:14.176898   12758 out.go:177] * [embed-certs-089000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:16:14.184850   12758 notify.go:220] Checking for updates...
	I1205 11:16:14.188802   12758 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:16:14.194788   12758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:16:14.198837   12758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:14.202846   12758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:14.209832   12758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:14.213878   12758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:14.218015   12758 config.go:182] Loaded profile config "embed-certs-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:14.218277   12758 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:14.222838   12758 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:16:14.229875   12758 start.go:297] selected driver: qemu2
	I1205 11:16:14.229884   12758 start.go:901] validating driver "qemu2" against &{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:14.229953   12758 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:14.232458   12758 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:16:14.232483   12758 cni.go:84] Creating CNI manager for ""
	I1205 11:16:14.232509   12758 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:14.232535   12758 start.go:340] cluster config:
	{Name:embed-certs-089000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-089000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:14.236987   12758 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:14.254614   12758 out.go:177] * Starting "embed-certs-089000" primary control-plane node in "embed-certs-089000" cluster
	I1205 11:16:14.258794   12758 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:14.258808   12758 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:16:14.258817   12758 cache.go:56] Caching tarball of preloaded images
	I1205 11:16:14.258895   12758 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:16:14.258901   12758 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:16:14.258964   12758 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/embed-certs-089000/config.json ...
	I1205 11:16:14.259302   12758 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:14.259333   12758 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "embed-certs-089000"
	I1205 11:16:14.259342   12758 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:14.259347   12758 fix.go:54] fixHost starting: 
	I1205 11:16:14.259466   12758 fix.go:112] recreateIfNeeded on embed-certs-089000: state=Stopped err=<nil>
	W1205 11:16:14.259475   12758 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:14.267873   12758 out.go:177] * Restarting existing qemu2 VM for "embed-certs-089000" ...
	I1205 11:16:14.271846   12758 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:14.271894   12758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:7a:17:2b:88:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:14.274126   12758 main.go:141] libmachine: STDOUT: 
	I1205 11:16:14.274152   12758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:14.274186   12758 fix.go:56] duration metric: took 14.837583ms for fixHost
	I1205 11:16:14.274191   12758 start.go:83] releasing machines lock for "embed-certs-089000", held for 14.853333ms
	W1205 11:16:14.274196   12758 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:14.274242   12758 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:14.274247   12758 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:19.276465   12758 start.go:360] acquireMachinesLock for embed-certs-089000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:19.277025   12758 start.go:364] duration metric: took 455.917µs to acquireMachinesLock for "embed-certs-089000"
	I1205 11:16:19.277180   12758 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:19.277201   12758 fix.go:54] fixHost starting: 
	I1205 11:16:19.278009   12758 fix.go:112] recreateIfNeeded on embed-certs-089000: state=Stopped err=<nil>
	W1205 11:16:19.278037   12758 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:19.296611   12758 out.go:177] * Restarting existing qemu2 VM for "embed-certs-089000" ...
	I1205 11:16:19.300491   12758 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:19.300729   12758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:7a:17:2b:88:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/embed-certs-089000/disk.qcow2
	I1205 11:16:19.311111   12758 main.go:141] libmachine: STDOUT: 
	I1205 11:16:19.311174   12758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:19.311272   12758 fix.go:56] duration metric: took 34.073458ms for fixHost
	I1205 11:16:19.311296   12758 start.go:83] releasing machines lock for "embed-certs-089000", held for 34.246458ms
	W1205 11:16:19.311552   12758 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-089000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:19.320528   12758 out.go:201] 
	W1205 11:16:19.324592   12758 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:19.324617   12758 out.go:270] * 
	* 
	W1205 11:16:19.327260   12758 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:19.334106   12758 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-089000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (70.079875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.30s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-842000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (34.627459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
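Note: this failure is downstream of SecondStart: the cluster never came back up, so the kubeconfig no longer carries a "no-preload-842000" context and every kubectl call fails before reaching any API server. A hedged sketch of the check that fails first (assuming k8s.io/client-go, which reads the same kubeconfig format; this helper is illustrative, not part of the suite):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG is the same file the report sets for all tests.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println("cannot read kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["no-preload-842000"]; !ok {
			// Matches the error above: with no context entry, client config
			// construction fails before any API request is made.
			fmt.Println(`context "no-preload-842000" does not exist`)
		}
	}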

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-842000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.895459ms)

** stderr ** 
	error: context "no-preload-842000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-842000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (33.174291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-842000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (32.9135ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
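Note: the "-want +got" block above is a structured diff in which every expected image carries a leading "-" because image list on the stopped VM returned nothing. A minimal sketch of how that output shape is produced (assuming github.com/google/go-cmp, whose diff format this matches; the two slices below are abbreviated from the list above):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // image list from a host that never started
		if diff := cmp.Diff(want, got); diff != "" {
			// "-" lines are expected-but-missing entries, as in the report.
			fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
		}
	}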

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-842000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-842000 --alsologtostderr -v=1: exit status 83 (45.666833ms)

-- stdout --
	* The control-plane node no-preload-842000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-842000"

-- /stdout --
** stderr ** 
	I1205 11:16:16.097672   12777 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:16.097880   12777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:16.097884   12777 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:16.097886   12777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:16.098011   12777 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:16.098230   12777 out.go:352] Setting JSON to false
	I1205 11:16:16.098237   12777 mustload.go:65] Loading cluster: no-preload-842000
	I1205 11:16:16.098469   12777 config.go:182] Loaded profile config "no-preload-842000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:16.102753   12777 out.go:177] * The control-plane node no-preload-842000 host is not running: state=Stopped
	I1205 11:16:16.106619   12777 out.go:177]   To start a cluster, run: "minikube start -p no-preload-842000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-842000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (33.075083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (32.711542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-842000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-701000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-701000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.067762208s)

-- stdout --
	* [default-k8s-diff-port-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-701000" primary control-plane node in "default-k8s-diff-port-701000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-701000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:16:16.555297   12801 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:16.555451   12801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:16.555454   12801 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:16.555457   12801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:16.555564   12801 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:16.556751   12801 out.go:352] Setting JSON to false
	I1205 11:16:16.574845   12801 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6348,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:16:16.574931   12801 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:16:16.578767   12801 out.go:177] * [default-k8s-diff-port-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:16:16.587697   12801 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:16:16.587764   12801 notify.go:220] Checking for updates...
	I1205 11:16:16.594547   12801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:16:16.598676   12801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:16.601619   12801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:16.604621   12801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:16.607720   12801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:16.610923   12801 config.go:182] Loaded profile config "embed-certs-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:16.610982   12801 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:16.611037   12801 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:16.615658   12801 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:16:16.621678   12801 start.go:297] selected driver: qemu2
	I1205 11:16:16.621686   12801 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:16:16.621699   12801 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:16.624264   12801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 11:16:16.627578   12801 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:16:16.630734   12801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:16:16.630750   12801 cni.go:84] Creating CNI manager for ""
	I1205 11:16:16.630770   12801 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:16.630774   12801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:16:16.630803   12801 start.go:340] cluster config:
	{Name:default-k8s-diff-port-701000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:16.635486   12801 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:16.643636   12801 out.go:177] * Starting "default-k8s-diff-port-701000" primary control-plane node in "default-k8s-diff-port-701000" cluster
	I1205 11:16:16.647680   12801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:16.647698   12801 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:16:16.647712   12801 cache.go:56] Caching tarball of preloaded images
	I1205 11:16:16.647795   12801 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:16:16.647801   12801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:16:16.647867   12801 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/default-k8s-diff-port-701000/config.json ...
	I1205 11:16:16.647881   12801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/default-k8s-diff-port-701000/config.json: {Name:mk44a4e94f9b26f6164dbb260872563dc2912b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:16:16.648366   12801 start.go:360] acquireMachinesLock for default-k8s-diff-port-701000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:16.648419   12801 start.go:364] duration metric: took 45.542µs to acquireMachinesLock for "default-k8s-diff-port-701000"
	I1205 11:16:16.648432   12801 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:16.648461   12801 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:16.655663   12801 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:16.673374   12801 start.go:159] libmachine.API.Create for "default-k8s-diff-port-701000" (driver="qemu2")
	I1205 11:16:16.673403   12801 client.go:168] LocalClient.Create starting
	I1205 11:16:16.673478   12801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:16.673523   12801 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:16.673539   12801 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:16.673580   12801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:16.673611   12801 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:16.673620   12801 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:16.674084   12801 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:16.833103   12801 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:16.977018   12801 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:16.977025   12801 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:16.977265   12801 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:16.987427   12801 main.go:141] libmachine: STDOUT: 
	I1205 11:16:16.987448   12801 main.go:141] libmachine: STDERR: 
	I1205 11:16:16.987507   12801 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2 +20000M
	I1205 11:16:16.995987   12801 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:16.996011   12801 main.go:141] libmachine: STDERR: 
	I1205 11:16:16.996025   12801 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:16.996030   12801 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:16.996045   12801 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:16.996075   12801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:31:50:63:b6:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:16.997914   12801 main.go:141] libmachine: STDOUT: 
	I1205 11:16:16.997927   12801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:16.997946   12801 client.go:171] duration metric: took 324.535709ms to LocalClient.Create
	I1205 11:16:19.000124   12801 start.go:128] duration metric: took 2.351639375s to createHost
	I1205 11:16:19.000207   12801 start.go:83] releasing machines lock for "default-k8s-diff-port-701000", held for 2.351768875s
	W1205 11:16:19.000300   12801 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:19.015461   12801 out.go:177] * Deleting "default-k8s-diff-port-701000" in qemu2 ...
	W1205 11:16:19.042660   12801 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:19.042711   12801 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:24.044905   12801 start.go:360] acquireMachinesLock for default-k8s-diff-port-701000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:24.045429   12801 start.go:364] duration metric: took 429.208µs to acquireMachinesLock for "default-k8s-diff-port-701000"
	I1205 11:16:24.045590   12801 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:24.045944   12801 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:24.051632   12801 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:24.101221   12801 start.go:159] libmachine.API.Create for "default-k8s-diff-port-701000" (driver="qemu2")
	I1205 11:16:24.101286   12801 client.go:168] LocalClient.Create starting
	I1205 11:16:24.101436   12801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:24.101534   12801 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:24.101558   12801 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:24.101625   12801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:24.101692   12801 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:24.101708   12801 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:24.102320   12801 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:24.331717   12801 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:24.522715   12801 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:24.522721   12801 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:24.522939   12801 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:24.533275   12801 main.go:141] libmachine: STDOUT: 
	I1205 11:16:24.533299   12801 main.go:141] libmachine: STDERR: 
	I1205 11:16:24.533383   12801 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2 +20000M
	I1205 11:16:24.541976   12801 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:24.541993   12801 main.go:141] libmachine: STDERR: 
	I1205 11:16:24.542004   12801 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:24.542011   12801 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:24.542022   12801 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:24.542057   12801 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8a:ff:5c:2b:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:24.543870   12801 main.go:141] libmachine: STDOUT: 
	I1205 11:16:24.543888   12801 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:24.543902   12801 client.go:171] duration metric: took 442.608167ms to LocalClient.Create
	I1205 11:16:26.544646   12801 start.go:128] duration metric: took 2.498664542s to createHost
	I1205 11:16:26.544709   12801 start.go:83] releasing machines lock for "default-k8s-diff-port-701000", held for 2.499249334s
	W1205 11:16:26.545206   12801 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-701000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:26.556820   12801 out.go:201] 
	W1205 11:16:26.566046   12801 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:26.566084   12801 out.go:270] * 
	* 
	W1205 11:16:26.568718   12801 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:26.576892   12801 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-701000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (71.434333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.14s)
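
Every failure in this group traces back to one line in the log above: Failed to connect to "/var/run/socket_vmnet": Connection refused. minikube launches the QEMU VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot connect because nothing is listening on the /var/run/socket_vmnet unix socket on this host. Below is a minimal diagnostic sketch in Go (a standalone helper, not part of this test suite) that reproduces the connection check:

	// probe_socket_vmnet.go - checks whether anything is accepting
	// connections on the socket_vmnet socket path used in this run.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above

		// A plain unix-domain dial mirrors the first thing socket_vmnet_client
		// must do; "connection refused" here reproduces the launcher failure
		// captured in the log.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If the dial fails, the socket_vmnet daemon is most likely not running on the build agent; restarting it should clear this whole class of GUEST_PROVISION failures.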

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-089000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (35.738583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
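
The "context ... does not exist" errors here are a cascade from the failed FirstStart: the VM never came up, so minikube never wrote an embed-certs-089000 context into the kubeconfig, and every kubectl invocation against that context fails before reaching any cluster. A sketch of the pre-check this implies, using k8s.io/client-go (the helper name is illustrative, not minikube's code):

	// has_context.go - verify a kubeconfig context exists before
	// running kubectl-style steps against it.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func hasContext(kubeconfig, name string) (bool, error) {
		// Load the raw kubeconfig; Contexts is a map keyed by context name.
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return false, err
		}
		_, ok := cfg.Contexts[name]
		return ok, nil
	}

	func main() {
		ok, err := hasContext(os.Getenv("KUBECONFIG"), "embed-certs-089000")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// false for this run: FirstStart never created the context.
		fmt.Println("context present:", ok)
	}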

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-089000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.458042ms)

** stderr ** 
	error: context "embed-certs-089000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-089000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (32.91175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
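
The assertion that produced "addon did not load correct image" is a substring check over the describe output, which is empty here because the describe itself failed on the missing context. A sketch of that check, shelling out to kubectl exactly as the log does (illustrative, not minikube's own implementation):

	// addon_image_check.go - describe the dashboard deployment and
	// look for an expected image in the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "registry.k8s.io/echoserver:1.4"

		// Same call as in the log; with the context missing, kubectl exits
		// non-zero and out stays empty, so the image check necessarily fails too.
		out, err := exec.Command("kubectl", "--context", "embed-certs-089000",
			"describe", "deploy/dashboard-metrics-scraper", "-n", "kubernetes-dashboard").CombinedOutput()
		if err != nil {
			fmt.Printf("describe failed (%v); deployment info: %q\n", err, out)
			return
		}
		fmt.Println("image present:", strings.Contains(string(out), want))
	}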

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-089000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (33.187458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
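
The "(-want +got)" block above is a set difference: every image expected for v1.31.2 minus whatever "minikube image list" returned, which is nothing because the VM never started. A small sketch of that computation (illustrative; the test itself uses a cmp-style diff):

	// image_diff.go - compute which expected images are missing
	// from the images the cluster actually reports.
	package main

	import "fmt"

	func missing(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, img := range got {
			have[img] = true
		}
		var out []string
		for _, img := range want {
			if !have[img] {
				out = append(out, img)
			}
		}
		return out
	}

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/kube-controller-manager:v1.31.2",
			"registry.k8s.io/kube-proxy:v1.31.2",
			"registry.k8s.io/kube-scheduler:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		// An empty "got" list yields all eight images, matching the diff above.
		fmt.Println(missing(want, nil))
	}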

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-089000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-089000 --alsologtostderr -v=1: exit status 83 (43.198208ms)

-- stdout --
	* The control-plane node embed-certs-089000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-089000"

-- /stdout --
** stderr ** 
	I1205 11:16:19.622200   12823 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:19.622389   12823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:19.622392   12823 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:19.622395   12823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:19.622520   12823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:19.622743   12823 out.go:352] Setting JSON to false
	I1205 11:16:19.622751   12823 mustload.go:65] Loading cluster: embed-certs-089000
	I1205 11:16:19.622985   12823 config.go:182] Loaded profile config "embed-certs-089000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:19.627422   12823 out.go:177] * The control-plane node embed-certs-089000 host is not running: state=Stopped
	I1205 11:16:19.631462   12823 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-089000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-089000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (33.046917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (33.123208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-089000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
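
The post-mortem helper repeated throughout this report runs "minikube status" and treats exit code 7 as "may be ok" rather than as a hard error, since 7 only means the host is stopped. A sketch of that exit-code handling (binary path and profile taken from the log; this shows the pattern, not the helper's actual source):

	// status_exit.go - run minikube status and branch on the exit code
	// instead of treating any non-zero exit as fatal.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "embed-certs-089000")
		out, err := cmd.Output()

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit code 7 is what the report annotates as "may be ok"
			// (host stopped); anything else is a real failure.
			fmt.Printf("status %q, exit code %d\n", out, ee.ExitCode())
			return
		}
		if err != nil {
			fmt.Println("could not run status:", err)
			return
		}
		fmt.Printf("status %q\n", out)
	}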

TestStartStop/group/newest-cni/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-626000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-626000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.879679583s)

-- stdout --
	* [newest-cni-626000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-626000" primary control-plane node in "newest-cni-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1205 11:16:19.961731   12840 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:19.961881   12840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:19.961884   12840 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:19.961886   12840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:19.962028   12840 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:19.963210   12840 out.go:352] Setting JSON to false
	I1205 11:16:19.981290   12840 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6351,"bootTime":1733419828,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:16:19.981360   12840 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:16:19.986502   12840 out.go:177] * [newest-cni-626000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:16:19.993544   12840 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:16:19.993610   12840 notify.go:220] Checking for updates...
	I1205 11:16:20.000454   12840 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:16:20.003487   12840 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:20.006420   12840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:20.009457   12840 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:20.012439   12840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:20.015815   12840 config.go:182] Loaded profile config "default-k8s-diff-port-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:20.015874   12840 config.go:182] Loaded profile config "multinode-454000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:20.015924   12840 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:20.020491   12840 out.go:177] * Using the qemu2 driver based on user configuration
	I1205 11:16:20.027433   12840 start.go:297] selected driver: qemu2
	I1205 11:16:20.027439   12840 start.go:901] validating driver "qemu2" against <nil>
	I1205 11:16:20.027445   12840 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:20.029889   12840 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1205 11:16:20.029927   12840 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 11:16:20.037489   12840 out.go:177] * Automatically selected the socket_vmnet network
	I1205 11:16:20.040513   12840 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 11:16:20.040536   12840 cni.go:84] Creating CNI manager for ""
	I1205 11:16:20.040559   12840 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:20.040568   12840 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 11:16:20.040610   12840 start.go:340] cluster config:
	{Name:newest-cni-626000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:20.045353   12840 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:20.053309   12840 out.go:177] * Starting "newest-cni-626000" primary control-plane node in "newest-cni-626000" cluster
	I1205 11:16:20.057454   12840 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:20.057477   12840 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:16:20.057491   12840 cache.go:56] Caching tarball of preloaded images
	I1205 11:16:20.057578   12840 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:16:20.057585   12840 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:16:20.057653   12840 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/newest-cni-626000/config.json ...
	I1205 11:16:20.057664   12840 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/newest-cni-626000/config.json: {Name:mk05a02632068c38096ee77ef75a1e86016638b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 11:16:20.058155   12840 start.go:360] acquireMachinesLock for newest-cni-626000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:20.058207   12840 start.go:364] duration metric: took 45.5µs to acquireMachinesLock for "newest-cni-626000"
	I1205 11:16:20.058219   12840 start.go:93] Provisioning new machine with config: &{Name:newest-cni-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:20.058255   12840 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:20.063340   12840 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:20.080865   12840 start.go:159] libmachine.API.Create for "newest-cni-626000" (driver="qemu2")
	I1205 11:16:20.080891   12840 client.go:168] LocalClient.Create starting
	I1205 11:16:20.080961   12840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:20.081001   12840 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:20.081009   12840 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:20.081044   12840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:20.081075   12840 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:20.081085   12840 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:20.081483   12840 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:20.239994   12840 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:20.331199   12840 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:20.331205   12840 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:20.331404   12840 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:20.341388   12840 main.go:141] libmachine: STDOUT: 
	I1205 11:16:20.341413   12840 main.go:141] libmachine: STDERR: 
	I1205 11:16:20.341476   12840 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2 +20000M
	I1205 11:16:20.350097   12840 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:20.350114   12840 main.go:141] libmachine: STDERR: 
	I1205 11:16:20.350136   12840 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:20.350141   12840 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:20.350154   12840 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:20.350194   12840 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:f0:60:b5:a2:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:20.352005   12840 main.go:141] libmachine: STDOUT: 
	I1205 11:16:20.352018   12840 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:20.352038   12840 client.go:171] duration metric: took 271.139084ms to LocalClient.Create
	I1205 11:16:22.354232   12840 start.go:128] duration metric: took 2.29594425s to createHost
	I1205 11:16:22.354348   12840 start.go:83] releasing machines lock for "newest-cni-626000", held for 2.2960815s
	W1205 11:16:22.354402   12840 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:22.364486   12840 out.go:177] * Deleting "newest-cni-626000" in qemu2 ...
	W1205 11:16:22.400657   12840 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:22.400697   12840 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:27.402992   12840 start.go:360] acquireMachinesLock for newest-cni-626000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:27.403416   12840 start.go:364] duration metric: took 312.833µs to acquireMachinesLock for "newest-cni-626000"
	I1205 11:16:27.403580   12840 start.go:93] Provisioning new machine with config: &{Name:newest-cni-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1205 11:16:27.403898   12840 start.go:125] createHost starting for "" (driver="qemu2")
	I1205 11:16:27.408842   12840 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 11:16:27.458976   12840 start.go:159] libmachine.API.Create for "newest-cni-626000" (driver="qemu2")
	I1205 11:16:27.459032   12840 client.go:168] LocalClient.Create starting
	I1205 11:16:27.459158   12840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/ca.pem
	I1205 11:16:27.459222   12840 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:27.459241   12840 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:27.459322   12840 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20052-8600/.minikube/certs/cert.pem
	I1205 11:16:27.459356   12840 main.go:141] libmachine: Decoding PEM data...
	I1205 11:16:27.459371   12840 main.go:141] libmachine: Parsing certificate...
	I1205 11:16:27.459986   12840 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1205 11:16:27.633052   12840 main.go:141] libmachine: Creating SSH key...
	I1205 11:16:27.735559   12840 main.go:141] libmachine: Creating Disk image...
	I1205 11:16:27.735566   12840 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1205 11:16:27.735758   12840 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2.raw /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:27.745713   12840 main.go:141] libmachine: STDOUT: 
	I1205 11:16:27.745731   12840 main.go:141] libmachine: STDERR: 
	I1205 11:16:27.745794   12840 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2 +20000M
	I1205 11:16:27.754152   12840 main.go:141] libmachine: STDOUT: Image resized.
	
	I1205 11:16:27.754167   12840 main.go:141] libmachine: STDERR: 
	I1205 11:16:27.754178   12840 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:27.754182   12840 main.go:141] libmachine: Starting QEMU VM...
	I1205 11:16:27.754192   12840 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:27.754224   12840 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7e:e6:85:4d:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:27.756027   12840 main.go:141] libmachine: STDOUT: 
	I1205 11:16:27.756047   12840 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:27.756058   12840 client.go:171] duration metric: took 297.02ms to LocalClient.Create
	I1205 11:16:29.758299   12840 start.go:128] duration metric: took 2.35434625s to createHost
	I1205 11:16:29.758621   12840 start.go:83] releasing machines lock for "newest-cni-626000", held for 2.355127125s
	W1205 11:16:29.758978   12840 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:29.773643   12840 out.go:201] 
	W1205 11:16:29.777671   12840 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:29.777712   12840 out.go:270] * 
	* 
	W1205 11:16:29.780672   12840 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:29.795622   12840 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-626000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000: exit status 7 (74.370833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.96s)
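
Both failed starts above follow the same shape: create the host, hit the socket_vmnet error, delete the profile, wait five seconds ("Will try again in 5 seconds ..."), then try exactly once more before exiting with GUEST_PROVISION. A sketch of that one-retry flow (startHost is a stand-in for minikube's provisioning step, not its real signature):

	// retry_start.go - one retry after a fixed delay, then give up,
	// mirroring the two create-host attempts in the log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startWithRetry(startHost func() error) error {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
			return startHost()
		}
		return nil
	}

	func main() {
		start := func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}
		if err := startWithRetry(start); err != nil {
			// Both attempts fail, as in the log.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}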

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-701000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-701000 create -f testdata/busybox.yaml: exit status 1 (28.714333ms)

** stderr ** 
	error: context "default-k8s-diff-port-701000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-701000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (33.063125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (32.770458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
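
The `context "default-k8s-diff-port-701000" does not exist` errors are downstream of the failed start: minikube only writes a kubeconfig context after a successful start, so every kubectl-based step fails identically. A hypothetical pre-check sketch (the helper name is illustrative; `kubectl config get-contexts -o name` is a real subcommand):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists reports whether kubectl knows a context with the given
	// name, by listing one context name per line and scanning for a match.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if c == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("default-k8s-diff-port-701000")
		fmt.Println(ok, err)
	}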

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-701000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-701000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-701000 describe deploy/metrics-server -n kube-system: exit status 1 (27.477792ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-701000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-701000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (33.332708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
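
For context, the assertion at start_stop_delete_test.go:221 expects the `--registries` and `--images` overrides to compose into a single reference, `fake.domain/registry.k8s.io/echoserver:1.4`. A tiny illustrative sketch of that composition (not the addon implementation itself):

	package main

	import "fmt"

	func main() {
		// --registries=MetricsServer=fake.domain maps the component to a
		// registry prefix; --images=MetricsServer=registry.k8s.io/echoserver:1.4
		// supplies the image. The deployment is expected to reference the
		// two joined with "/".
		registry := "fake.domain"
		image := "registry.k8s.io/echoserver:1.4"
		fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
	}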

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-701000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-701000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.192841708s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-701000" primary control-plane node in "default-k8s-diff-port-701000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-701000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-701000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:16:30.093805   12900 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:30.093947   12900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:30.093951   12900 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:30.093953   12900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:30.094102   12900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:30.095160   12900 out.go:352] Setting JSON to false
	I1205 11:16:30.113517   12900 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6362,"bootTime":1733419828,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:16:30.113588   12900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:16:30.117466   12900 out.go:177] * [default-k8s-diff-port-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:16:30.124330   12900 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:16:30.124404   12900 notify.go:220] Checking for updates...
	I1205 11:16:30.133313   12900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:16:30.137338   12900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:30.140369   12900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:30.143296   12900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:30.146314   12900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:30.149653   12900 config.go:182] Loaded profile config "default-k8s-diff-port-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:30.149917   12900 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:30.153186   12900 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:16:30.160392   12900 start.go:297] selected driver: qemu2
	I1205 11:16:30.160398   12900 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:30.160446   12900 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:30.163045   12900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 11:16:30.163073   12900 cni.go:84] Creating CNI manager for ""
	I1205 11:16:30.163103   12900 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:30.163128   12900 start.go:340] cluster config:
	{Name:default-k8s-diff-port-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:30.167685   12900 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:30.176297   12900 out.go:177] * Starting "default-k8s-diff-port-701000" primary control-plane node in "default-k8s-diff-port-701000" cluster
	I1205 11:16:30.179325   12900 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:30.179343   12900 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:16:30.179357   12900 cache.go:56] Caching tarball of preloaded images
	I1205 11:16:30.179442   12900 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:16:30.179448   12900 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:16:30.179520   12900 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/default-k8s-diff-port-701000/config.json ...
	I1205 11:16:30.180097   12900 start.go:360] acquireMachinesLock for default-k8s-diff-port-701000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:30.180128   12900 start.go:364] duration metric: took 25.083µs to acquireMachinesLock for "default-k8s-diff-port-701000"
	I1205 11:16:30.180137   12900 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:30.180143   12900 fix.go:54] fixHost starting: 
	I1205 11:16:30.180262   12900 fix.go:112] recreateIfNeeded on default-k8s-diff-port-701000: state=Stopped err=<nil>
	W1205 11:16:30.180271   12900 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:30.184342   12900 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-701000" ...
	I1205 11:16:30.190337   12900 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:30.190380   12900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8a:ff:5c:2b:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:30.192623   12900 main.go:141] libmachine: STDOUT: 
	I1205 11:16:30.192641   12900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:30.192669   12900 fix.go:56] duration metric: took 12.525583ms for fixHost
	I1205 11:16:30.192674   12900 start.go:83] releasing machines lock for "default-k8s-diff-port-701000", held for 12.541083ms
	W1205 11:16:30.192679   12900 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:30.192728   12900 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:30.192733   12900 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:35.195011   12900 start.go:360] acquireMachinesLock for default-k8s-diff-port-701000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:35.195416   12900 start.go:364] duration metric: took 306.583µs to acquireMachinesLock for "default-k8s-diff-port-701000"
	I1205 11:16:35.195535   12900 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:35.195556   12900 fix.go:54] fixHost starting: 
	I1205 11:16:35.196265   12900 fix.go:112] recreateIfNeeded on default-k8s-diff-port-701000: state=Stopped err=<nil>
	W1205 11:16:35.196293   12900 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:35.204858   12900 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-701000" ...
	I1205 11:16:35.207978   12900 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:35.208229   12900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:8a:ff:5c:2b:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/default-k8s-diff-port-701000/disk.qcow2
	I1205 11:16:35.218612   12900 main.go:141] libmachine: STDOUT: 
	I1205 11:16:35.218671   12900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:35.218742   12900 fix.go:56] duration metric: took 23.187917ms for fixHost
	I1205 11:16:35.218766   12900 start.go:83] releasing machines lock for "default-k8s-diff-port-701000", held for 23.327083ms
	W1205 11:16:35.218942   12900 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-701000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-701000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:35.225949   12900 out.go:201] 
	W1205 11:16:35.229999   12900 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:35.230052   12900 out.go:270] * 
	* 
	W1205 11:16:35.232553   12900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:35.240952   12900 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-701000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (71.968708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
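
The libmachine command lines above show the wrapper pattern: socket_vmnet_client connects to /var/run/socket_vmnet and hands the connection to qemu-system-aarch64 as an inherited descriptor, which is what `-netdev socket,id=net0,fd=3` refers to. A rough Go illustration of that fd-passing pattern (an assumption about the mechanism inferred from the command line, not socket_vmnet_client's actual source):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the daemon first; if this fails the VM can never get
		// its network, which is exactly the failure mode in this report.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatal(err) // "connection refused" when the daemon is down
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] is inherited by the child as fd 3, which is what
		// "-netdev socket,id=net0,fd=3" on the qemu command line names.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
	}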

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-626000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-626000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.192697042s)

                                                
                                                
-- stdout --
	* [newest-cni-626000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-626000" primary control-plane node in "newest-cni-626000" cluster
	* Restarting existing qemu2 VM for "newest-cni-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:16:33.286045   12925 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:33.286211   12925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:33.286214   12925 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:33.286217   12925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:33.286338   12925 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:33.287376   12925 out.go:352] Setting JSON to false
	I1205 11:16:33.304948   12925 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6365,"bootTime":1733419828,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 11:16:33.305028   12925 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 11:16:33.309278   12925 out.go:177] * [newest-cni-626000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 11:16:33.316323   12925 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 11:16:33.316331   12925 notify.go:220] Checking for updates...
	I1205 11:16:33.323237   12925 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 11:16:33.326163   12925 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 11:16:33.330220   12925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 11:16:33.333224   12925 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 11:16:33.336242   12925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 11:16:33.339568   12925 config.go:182] Loaded profile config "newest-cni-626000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:33.339847   12925 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 11:16:33.343145   12925 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 11:16:33.350219   12925 start.go:297] selected driver: qemu2
	I1205 11:16:33.350226   12925 start.go:901] validating driver "qemu2" against &{Name:newest-cni-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:33.350304   12925 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 11:16:33.352856   12925 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 11:16:33.352885   12925 cni.go:84] Creating CNI manager for ""
	I1205 11:16:33.352904   12925 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 11:16:33.352935   12925 start.go:340] cluster config:
	{Name:newest-cni-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 11:16:33.357288   12925 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 11:16:33.365256   12925 out.go:177] * Starting "newest-cni-626000" primary control-plane node in "newest-cni-626000" cluster
	I1205 11:16:33.368226   12925 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 11:16:33.368239   12925 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 11:16:33.368248   12925 cache.go:56] Caching tarball of preloaded images
	I1205 11:16:33.368295   12925 preload.go:172] Found /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1205 11:16:33.368300   12925 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1205 11:16:33.368346   12925 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/newest-cni-626000/config.json ...
	I1205 11:16:33.368856   12925 start.go:360] acquireMachinesLock for newest-cni-626000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:33.368886   12925 start.go:364] duration metric: took 24.166µs to acquireMachinesLock for "newest-cni-626000"
	I1205 11:16:33.368894   12925 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:33.368899   12925 fix.go:54] fixHost starting: 
	I1205 11:16:33.369018   12925 fix.go:112] recreateIfNeeded on newest-cni-626000: state=Stopped err=<nil>
	W1205 11:16:33.369027   12925 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:33.373249   12925 out.go:177] * Restarting existing qemu2 VM for "newest-cni-626000" ...
	I1205 11:16:33.381235   12925 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:33.381282   12925 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7e:e6:85:4d:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:33.383462   12925 main.go:141] libmachine: STDOUT: 
	I1205 11:16:33.383479   12925 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:33.383509   12925 fix.go:56] duration metric: took 14.60825ms for fixHost
	I1205 11:16:33.383514   12925 start.go:83] releasing machines lock for "newest-cni-626000", held for 14.623958ms
	W1205 11:16:33.383520   12925 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:33.383557   12925 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:33.383561   12925 start.go:729] Will try again in 5 seconds ...
	I1205 11:16:38.385902   12925 start.go:360] acquireMachinesLock for newest-cni-626000: {Name:mkc899779344f7674c1e0d059fa3f0098e44699a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 11:16:38.386461   12925 start.go:364] duration metric: took 437.666µs to acquireMachinesLock for "newest-cni-626000"
	I1205 11:16:38.386657   12925 start.go:96] Skipping create...Using existing machine configuration
	I1205 11:16:38.386678   12925 fix.go:54] fixHost starting: 
	I1205 11:16:38.387434   12925 fix.go:112] recreateIfNeeded on newest-cni-626000: state=Stopped err=<nil>
	W1205 11:16:38.387461   12925 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 11:16:38.391088   12925 out.go:177] * Restarting existing qemu2 VM for "newest-cni-626000" ...
	I1205 11:16:38.399883   12925 qemu.go:418] Using hvf for hardware acceleration
	I1205 11:16:38.400108   12925 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:7e:e6:85:4d:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20052-8600/.minikube/machines/newest-cni-626000/disk.qcow2
	I1205 11:16:38.410846   12925 main.go:141] libmachine: STDOUT: 
	I1205 11:16:38.410920   12925 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1205 11:16:38.411024   12925 fix.go:56] duration metric: took 24.346833ms for fixHost
	I1205 11:16:38.411047   12925 start.go:83] releasing machines lock for "newest-cni-626000", held for 24.564ms
	W1205 11:16:38.411233   12925 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1205 11:16:38.418877   12925 out.go:201] 
	W1205 11:16:38.421919   12925 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1205 11:16:38.421946   12925 out.go:270] * 
	* 
	W1205 11:16:38.424422   12925 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 11:16:38.437997   12925 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-626000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000: exit status 7 (73.67875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-701000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (35.810291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-701000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-701000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-701000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.260708ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-701000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-701000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (33.127708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-701000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (32.771375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
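
The `(-want +got)` block above is a go-cmp style diff: the expected image list is compared against the output of `minikube image list`, and because the VM never started, `got` is empty and every expected image shows as missing. A minimal sketch reproducing the diff format (assumes the github.com/google/go-cmp module is available):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: no VM, so `image list` returned nothing
		// cmp.Diff marks entries only in `want` with "-" and entries only
		// in `got` with "+", matching the report's "(-want +got)" notation.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
		}
	}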

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-701000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-701000 --alsologtostderr -v=1: exit status 83 (46.874375ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-701000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-701000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:16:35.532999   12944 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:35.533214   12944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:35.533217   12944 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:35.533219   12944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:35.533365   12944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:35.533591   12944 out.go:352] Setting JSON to false
	I1205 11:16:35.533598   12944 mustload.go:65] Loading cluster: default-k8s-diff-port-701000
	I1205 11:16:35.533826   12944 config.go:182] Loaded profile config "default-k8s-diff-port-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:35.538920   12944 out.go:177] * The control-plane node default-k8s-diff-port-701000 host is not running: state=Stopped
	I1205 11:16:35.542914   12944 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-701000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-701000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (32.9305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (32.759708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-701000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-626000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000: exit status 7 (34.075709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-626000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-626000 --alsologtostderr -v=1: exit status 83 (46.490334ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-626000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-626000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 11:16:38.635518   12971 out.go:345] Setting OutFile to fd 1 ...
	I1205 11:16:38.635709   12971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:38.635712   12971 out.go:358] Setting ErrFile to fd 2...
	I1205 11:16:38.635715   12971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 11:16:38.635847   12971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 11:16:38.636087   12971 out.go:352] Setting JSON to false
	I1205 11:16:38.636094   12971 mustload.go:65] Loading cluster: newest-cni-626000
	I1205 11:16:38.636303   12971 config.go:182] Loaded profile config "newest-cni-626000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 11:16:38.640350   12971 out.go:177] * The control-plane node newest-cni-626000 host is not running: state=Stopped
	I1205 11:16:38.644396   12971 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-626000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-626000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000: exit status 7 (34.644875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-626000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000: exit status 7 (34.205ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
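Both failures in this group reduce to the same guard: before pausing or listing images, minikube loads the profile config and probes the host state, and a stopped host short-circuits with advice and a reserved exit code (83 for pause above, 7 for status). A sketch of that guard with the state probe stubbed out; the messages and exit code are taken from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os"
)

func main() {
	profile := "newest-cni-626000"
	state := "Stopped" // stand-in for the real host-state probe (mustload.go in the log)

	// Mirror the guard visible in the stderr log: refuse to act on a
	// stopped host, print the same hint, and exit with the code the
	// test observed for pause (83).
	if state == "Stopped" {
		fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, state)
		fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
		os.Exit(83)
	}
}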
Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 9.51
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.44
39 TestErrorSpam/start 0.41
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.14
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 9.66
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.97
55 TestFunctional/serial/CacheCmd/cache/add_local 1.09
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 0.27
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.91
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.39
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.22
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.07
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.44
258 TestNoKubernetes/serial/Stop 3.34
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
275 TestStartStop/group/old-k8s-version/serial/Stop 3.04
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
288 TestStartStop/group/no-preload/serial/Stop 3.57
291 TestStartStop/group/embed-certs/serial/Stop 3.96
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
294 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.08
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.07
313 TestStartStop/group/newest-cni/serial/Stop 3.17
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.11
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1205 10:50:15.123121    9136 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1205 10:50:15.123666    9136 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
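The preload check above (preload.go:131/146) is essentially a filesystem probe: build the expected tarball path for the Kubernetes version and container runtime, then stat it. A sketch under the cache layout visible in the log; the path template is inferred from the filename above, not taken from minikube's source:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the cache location seen in the log; the root
// directory and name template are assumptions for illustration.
func preloadPath(root, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-arm64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(root, ".minikube", "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("HOME"), "v1.20.0", "docker")
	// os.Stat is enough to decide between "Found local preload" and
	// falling back to the remote tarball download seen in later tests.
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p)
	} else {
		fmt.Println("No local preload, would download:", p)
	}
}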

TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-751000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-751000: exit status 85 (105.677084ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:49 PST |          |
	|         | -p download-only-751000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 10:49:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 10:49:55.395181    9137 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:49:55.395343    9137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:49:55.395347    9137 out.go:358] Setting ErrFile to fd 2...
	I1205 10:49:55.395350    9137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:49:55.395467    9137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	W1205 10:49:55.395576    9137 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20052-8600/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20052-8600/.minikube/config/config.json: no such file or directory
	I1205 10:49:55.396912    9137 out.go:352] Setting JSON to true
	I1205 10:49:55.414870    9137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4767,"bootTime":1733419828,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:49:55.414946    9137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:49:55.420899    9137 out.go:97] [download-only-751000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:49:55.421056    9137 notify.go:220] Checking for updates...
	W1205 10:49:55.421095    9137 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 10:49:55.424769    9137 out.go:169] MINIKUBE_LOCATION=20052
	I1205 10:49:55.427802    9137 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:49:55.432843    9137 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:49:55.436797    9137 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:49:55.439823    9137 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	W1205 10:49:55.445735    9137 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 10:49:55.445987    9137 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:49:55.449755    9137 out.go:97] Using the qemu2 driver based on user configuration
	I1205 10:49:55.449774    9137 start.go:297] selected driver: qemu2
	I1205 10:49:55.449787    9137 start.go:901] validating driver "qemu2" against <nil>
	I1205 10:49:55.449860    9137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 10:49:55.452804    9137 out.go:169] Automatically selected the socket_vmnet network
	I1205 10:49:55.458216    9137 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1205 10:49:55.458322    9137 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 10:49:55.458369    9137 cni.go:84] Creating CNI manager for ""
	I1205 10:49:55.458401    9137 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1205 10:49:55.458461    9137 start.go:340] cluster config:
	{Name:download-only-751000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-751000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:49:55.462970    9137 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:49:55.467805    9137 out.go:97] Downloading VM boot image ...
	I1205 10:49:55.467826    9137 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1205 10:50:04.196206    9137 out.go:97] Starting "download-only-751000" primary control-plane node in "download-only-751000" cluster
	I1205 10:50:04.196226    9137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 10:50:04.266679    9137 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 10:50:04.266703    9137 cache.go:56] Caching tarball of preloaded images
	I1205 10:50:04.266918    9137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 10:50:04.273173    9137 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 10:50:04.273182    9137 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:04.365810    9137 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1205 10:50:13.759386    9137 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:13.759553    9137 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:14.454153    9137 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1205 10:50:14.454348    9137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/download-only-751000/config.json ...
	I1205 10:50:14.454364    9137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20052-8600/.minikube/profiles/download-only-751000/config.json: {Name:mk74e11fe0fc9351120f8578bdc0f833b5da9df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 10:50:14.454615    9137 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1205 10:50:14.454873    9137 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1205 10:50:15.055057    9137 out.go:193] 
	W1205 10:50:15.064136    9137 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20052-8600/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x105614320 0x105614320 0x105614320 0x105614320 0x105614320 0x105614320 0x105614320] Decompressors:map[bz2:0x14000491380 gz:0x14000491388 tar:0x14000491330 tar.bz2:0x14000491340 tar.gz:0x14000491350 tar.xz:0x14000491360 tar.zst:0x14000491370 tbz2:0x14000491340 tgz:0x14000491350 txz:0x14000491360 tzst:0x14000491370 xz:0x14000491390 zip:0x140004913a0 zst:0x14000491398] Getters:map[file:0x14000790c60 http:0x14000d16140 https:0x14000d16190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1205 10:50:15.064163    9137 out_reason.go:110] 
	W1205 10:50:15.073069    9137 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 10:50:15.077028    9137 out.go:193] 
	
	
	* The control-plane node download-only-751000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-751000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)
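The kubectl failure buried in the log above comes from minikube's downloader, which appends `?checksum=file:<url>.sha256` so the fetch library (hashicorp/go-getter, judging by the `getter: &{...}` dump) downloads a detached checksum and verifies the binary against it; a 404 on the `.sha256` URL therefore fails the whole download even when the binary itself exists. A sketch of the same call, assuming go-getter's `GetFile` helper:

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum query tells go-getter to fetch the .sha256 file
	// first and verify the artifact against it; if that URL 404s, the
	// whole fetch fails, which is exactly the error in the log above.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	dst := "/tmp/kubectl.download"

	if err := getter.GetFile(dst, src); err != nil {
		fmt.Println("download failed:", err)
	}
}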

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-751000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (9.51s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-386000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-386000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (9.512223792s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (9.51s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1205 10:50:25.021830    9136 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1205 10:50:25.021885    9136 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-386000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-386000: exit status 85 (83.115667ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:49 PST |                     |
	|         | -p download-only-751000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| delete  | -p download-only-751000        | download-only-751000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST | 05 Dec 24 10:50 PST |
	| start   | -o=json --download-only        | download-only-386000 | jenkins | v1.34.0 | 05 Dec 24 10:50 PST |                     |
	|         | -p download-only-386000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 10:50:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 10:50:15.541829    9161 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:50:15.541985    9161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:50:15.541988    9161 out.go:358] Setting ErrFile to fd 2...
	I1205 10:50:15.541991    9161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:50:15.542116    9161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:50:15.543295    9161 out.go:352] Setting JSON to true
	I1205 10:50:15.561001    9161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4787,"bootTime":1733419828,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:50:15.561082    9161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:50:15.566402    9161 out.go:97] [download-only-386000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:50:15.566490    9161 notify.go:220] Checking for updates...
	I1205 10:50:15.570373    9161 out.go:169] MINIKUBE_LOCATION=20052
	I1205 10:50:15.573387    9161 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:50:15.576430    9161 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:50:15.580386    9161 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:50:15.583451    9161 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	W1205 10:50:15.589338    9161 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 10:50:15.589574    9161 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:50:15.592410    9161 out.go:97] Using the qemu2 driver based on user configuration
	I1205 10:50:15.592420    9161 start.go:297] selected driver: qemu2
	I1205 10:50:15.592424    9161 start.go:901] validating driver "qemu2" against <nil>
	I1205 10:50:15.592475    9161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 10:50:15.595424    9161 out.go:169] Automatically selected the socket_vmnet network
	I1205 10:50:15.601777    9161 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1205 10:50:15.601863    9161 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 10:50:15.601884    9161 cni.go:84] Creating CNI manager for ""
	I1205 10:50:15.601916    9161 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1205 10:50:15.601921    9161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 10:50:15.601973    9161 start.go:340] cluster config:
	{Name:download-only-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:50:15.606141    9161 iso.go:125] acquiring lock: {Name:mkd72272cf40e6ca5e7e6a5a9617ae61287c310b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 10:50:15.609361    9161 out.go:97] Starting "download-only-386000" primary control-plane node in "download-only-386000" cluster
	I1205 10:50:15.609368    9161 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:50:15.670162    9161 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1205 10:50:15.670180    9161 cache.go:56] Caching tarball of preloaded images
	I1205 10:50:15.670369    9161 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1205 10:50:15.674499    9161 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1205 10:50:15.674507    9161 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1205 10:50:15.754615    9161 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20052-8600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-386000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-386000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)
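Comparing this run with the v1.20.0 one shows the CNI decision flipping on the Kubernetes version: cni.go:162 reported "CNI unnecessary" for v1.20.0 with docker, while cni.go:158 recommends bridge for v1.24+. A sketch of that version gate, using golang.org/x/mod/semver as a stand-in for minikube's own version helpers:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// chooseCNI mirrors the decision visible in the two logs above: docker
// runtime on Kubernetes v1.24 or newer gets a bridge CNI, older gets none.
func chooseCNI(k8sVersion, runtime string) string {
	if runtime == "docker" && semver.Compare(k8sVersion, "v1.24.0") >= 0 {
		return "bridge"
	}
	return ""
}

func main() {
	fmt.Printf("v1.20.0: %q\n", chooseCNI("v1.20.0", "docker")) // "" (no CNI)
	fmt.Printf("v1.31.2: %q\n", chooseCNI("v1.31.2", "docker")) // "bridge"
}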

TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-386000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
I1205 10:50:25.554806    9136 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-193000 --alsologtostderr --binary-mirror http://127.0.0.1:51554 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-193000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-193000
--- PASS: TestBinaryMirror (0.31s)
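`--binary-mirror http://127.0.0.1:51554` redirects the kubectl/kubelet/kubeadm downloads (note the `Not caching binary` line above) to a caller-supplied base URL. The test stands up its own endpoint; a minimal stand-in mirror, assuming a local `./mirror` directory laid out like the dl.k8s.io release paths:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of pre-fetched release binaries so that
	// "minikube start --binary-mirror http://127.0.0.1:51554" resolves
	// paths like /release/v1.31.2/bin/darwin/arm64/kubectl locally.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:51554", fs))
}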

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-904000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-904000: exit status 85 (62.695292ms)

-- stdout --
	* Profile "addons-904000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-904000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-904000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-904000: exit status 85 (59.8975ms)

-- stdout --
	* Profile "addons-904000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-904000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (10.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
I1205 11:01:58.862904    9136 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 11:01:58.863077    9136 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1205 11:02:00.808312    9136 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1205 11:02:00.808549    9136 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1205 11:02:00.808584    9136 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit
I1205 11:02:01.342976    9136 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0 0x1076696e0] Decompressors:map[bz2:0x1400000ef78 gz:0x1400000f070 tar:0x1400000f020 tar.bz2:0x1400000f030 tar.gz:0x1400000f040 tar.xz:0x1400000f050 tar.zst:0x1400000f060 tbz2:0x1400000f030 tgz:0x1400000f040 txz:0x1400000f050 tzst:0x1400000f060 xz:0x1400000f078 zip:0x1400000f080 zst:0x1400000f090] Getters:map[file:0x140015faa30 http:0x14000aa7270 https:0x14000aa72c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 11:02:01.343096    9136 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2820100655/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.44s)
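The driver install above shows a two-step fallback: try the GOARCH-suffixed release asset (`-arm64`) first, and when its checksum file 404s, retry the common, unsuffixed name (the `trying to get the common version` line). A sketch of that order with the downloader stubbed out for illustration:

package main

import "fmt"

const (
	base    = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	archURL = base + "-arm64"
)

// download stands in for minikube's real downloader; the arch-specific
// URL is made to fail so the fallback path is visible.
func download(url string) error {
	fmt.Println("Downloading:", url)
	if url == archURL {
		return fmt.Errorf("invalid checksum: Error downloading checksum file: bad response code: 404")
	}
	return nil
}

func main() {
	// Mirror the order in the log: GOARCH-suffixed artifact first,
	// then the common (unsuffixed) release asset.
	if err := download(archURL); err != nil {
		fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", err)
		_ = download(base)
	}
}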

TestErrorSpam/start (0.41s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status: exit status 7 (35.821959ms)

-- stdout --
	nospam-846000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status: exit status 7 (34.565583ms)

-- stdout --
	nospam-846000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status: exit status 7 (34.4615ms)

-- stdout --
	nospam-846000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)

TestErrorSpam/pause (0.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause: exit status 83 (44.469875ms)

-- stdout --
	* The control-plane node nospam-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-846000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause: exit status 83 (44.913875ms)

-- stdout --
	* The control-plane node nospam-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-846000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause: exit status 83 (45.757ms)

-- stdout --
	* The control-plane node nospam-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-846000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause: exit status 83 (43.253417ms)

-- stdout --
	* The control-plane node nospam-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-846000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause: exit status 83 (43.457833ms)

-- stdout --
	* The control-plane node nospam-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-846000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause: exit status 83 (44.737333ms)

-- stdout --
	* The control-plane node nospam-846000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-846000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (9.66s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 stop: (1.930509792s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 stop: (4.017598584s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-846000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-846000 stop: (3.705350042s)
--- PASS: TestErrorSpam/stop (9.66s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20052-8600/.minikube/files/etc/test/nested/copy/9136/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.97s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.97s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2055578222/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cache add minikube-local-cache-test:functional-606000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 cache delete minikube-local-cache-test:functional-606000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-606000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 config get cpus: exit status 14 (34.436958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 config get cpus: exit status 14 (39.369958ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-606000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-606000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (167.822125ms)

-- stdout --
	* [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1205 10:52:03.545207    9724 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:52:03.545408    9724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:03.545412    9724 out.go:358] Setting ErrFile to fd 2...
	I1205 10:52:03.545415    9724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:03.545587    9724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:52:03.547042    9724 out.go:352] Setting JSON to false
	I1205 10:52:03.567635    9724 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4895,"bootTime":1733419828,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:52:03.567713    9724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:52:03.573109    9724 out.go:177] * [functional-606000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1205 10:52:03.581013    9724 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:52:03.581061    9724 notify.go:220] Checking for updates...
	I1205 10:52:03.586483    9724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:52:03.589966    9724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:52:03.592990    9724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:52:03.595988    9724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:52:03.599018    9724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:52:03.602254    9724 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:52:03.602541    9724 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:52:03.605952    9724 out.go:177] * Using the qemu2 driver based on existing profile
	I1205 10:52:03.612947    9724 start.go:297] selected driver: qemu2
	I1205 10:52:03.612958    9724 start.go:901] validating driver "qemu2" against &{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:52:03.613031    9724 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:52:03.618959    9724 out.go:201] 
	W1205 10:52:03.622931    9724 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 10:52:03.627004    9724 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-606000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-606000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-606000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.867333ms)

-- stdout --
	* [functional-606000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1205 10:52:03.782021    9735 out.go:345] Setting OutFile to fd 1 ...
	I1205 10:52:03.782178    9735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:03.782181    9735 out.go:358] Setting ErrFile to fd 2...
	I1205 10:52:03.782183    9735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 10:52:03.782311    9735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20052-8600/.minikube/bin
	I1205 10:52:03.783821    9735 out.go:352] Setting JSON to false
	I1205 10:52:03.802075    9735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4895,"bootTime":1733419828,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1205 10:52:03.802168    9735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1205 10:52:03.810945    9735 out.go:177] * [functional-606000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1205 10:52:03.814022    9735 out.go:177]   - MINIKUBE_LOCATION=20052
	I1205 10:52:03.814078    9735 notify.go:220] Checking for updates...
	I1205 10:52:03.819927    9735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	I1205 10:52:03.822938    9735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1205 10:52:03.824366    9735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 10:52:03.827951    9735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	I1205 10:52:03.830946    9735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 10:52:03.834369    9735 config.go:182] Loaded profile config "functional-606000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1205 10:52:03.834649    9735 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 10:52:03.838934    9735 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1205 10:52:03.846005    9735 start.go:297] selected driver: qemu2
	I1205 10:52:03.846011    9735 start.go:901] validating driver "qemu2" against &{Name:functional-606000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-606000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 10:52:03.846069    9735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 10:52:03.851961    9735 out.go:201] 
	W1205 10:52:03.856016    9735 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 10:52:03.859926    9735 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.883456417s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-606000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.91s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image rm kicbase/echo-server:functional-606000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-606000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 image save --daemon kicbase/echo-server:functional-606000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-606000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "50.778042ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.848583ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "51.83025ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.786292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012116333s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-606000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-606000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-606000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-606000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.39s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-726000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-726000 --output=json --user=testUser: (3.385214459s)
--- PASS: TestJSONOutput/stop/Command (3.39s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-321000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-321000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.792833ms)

-- stdout --
	{"specversion":"1.0","id":"9c323e9f-14d0-41cb-9ca1-1c65ce9ee0d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-321000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e48a962f-03f7-49f2-b59e-9d17a239fddb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20052"}}
	{"specversion":"1.0","id":"8b0fc5ee-eb45-4061-a4f6-bf096a86129d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig"}}
	{"specversion":"1.0","id":"0a001d51-85ef-4e53-b5de-19d379988d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"680940c9-8d6a-456a-b23d-a9b0a09a3f86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cb9648f0-21eb-4aba-976d-1e46bbf99363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube"}}
	{"specversion":"1.0","id":"7931ef86-9acb-44fe-912a-e8ac5896a64f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"55b2e024-0b09-41c8-9d2b-68c86ec14ae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-321000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-321000
--- PASS: TestErrorJSONOutput (0.22s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-589000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (106.922708ms)

-- stdout --
	* [NoKubernetes-589000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20052
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20052-8600/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20052-8600/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-589000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-589000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.65325ms)

-- stdout --
	* The control-plane node NoKubernetes-589000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-589000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.736958417s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.705230042s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.44s)

TestNoKubernetes/serial/Stop (3.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-589000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-589000: (3.340135917s)
--- PASS: TestNoKubernetes/serial/Stop (3.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-589000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-589000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.634166ms)

-- stdout --
	* The control-plane node NoKubernetes-589000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-589000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-616000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (3.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-811000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-811000 --alsologtostderr -v=3: (3.038503542s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-811000 -n old-k8s-version-811000: exit status 7 (50.46ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-811000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (3.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-842000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-842000 --alsologtostderr -v=3: (3.573264833s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.57s)

TestStartStop/group/embed-certs/serial/Stop (3.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-089000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-089000 --alsologtostderr -v=3: (3.957355917s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.96s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-842000 -n no-preload-842000: exit status 7 (61.987583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-842000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-089000 -n embed-certs-089000: exit status 7 (60.53275ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-089000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-701000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-701000 --alsologtostderr -v=3: (3.075504084s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-626000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.07s)

TestStartStop/group/newest-cni/serial/Stop (3.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-626000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-626000 --alsologtostderr -v=3: (3.171301667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-701000 -n default-k8s-diff-port-701000: exit status 7 (39.398125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-701000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-626000 -n newest-cni-626000: exit status 7 (61.284375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-626000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.69s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2667956517/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733424688050189000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2667956517/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733424688050189000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2667956517/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733424688050189000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2667956517/001/test-1733424688050189000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.725375ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:28.116440    9136 retry.go:31] will retry after 706.439139ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.531834ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:28.916804    9136 retry.go:31] will retry after 424.982759ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (97.51475ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:29.441634    9136 retry.go:31] will retry after 1.270280082s: exit status 83
I1205 10:51:30.675086    9136 retry.go:31] will retry after 10.011203458s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.475584ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:30.807798    9136 retry.go:31] will retry after 1.841853579s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.848042ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:32.743851    9136 retry.go:31] will retry after 2.574774045s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.854416ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:35.410886    9136 retry.go:31] will retry after 4.067315429s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.51275ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo umount -f /mount-9p": exit status 83 (48.965792ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2667956517/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.69s)
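
The interleaved retry.go:31 lines above show the findmnt probe being re-run with growing, uneven delays until the test gives up. A self-contained Go sketch of that retry-with-jittered-backoff pattern (illustrative only, not minikube's pkg/util/retry; the probe, initial delay, and time cap are assumptions):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs probe with roughly doubling, jittered delays until it
// succeeds or the total elapsed time exceeds maxTime.
func retryExpo(probe func() error, initial, maxTime time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxTime {
			return fmt.Errorf("timed out after %v: %w", maxTime, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		// Double the delay and add jitter, which is why the logged
		// intervals above are uneven (706ms, 424ms, 1.27s, ...).
		delay = 2*delay + time.Duration(rand.Int63n(int64(delay)))
	}
}

func main() {
	probe := func() error { return errors.New("exit status 83") }
	fmt.Println(retryExpo(probe, 500*time.Millisecond, 5*time.Second))
}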

TestFunctional/parallel/MountCmd/specific-port (11.49s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4005845056/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (68.679ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:39.810203    9136 retry.go:31] will retry after 356.855504ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.975667ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:40.259427    9136 retry.go:31] will retry after 802.856175ms: exit status 83
I1205 10:51:40.688526    9136 retry.go:31] will retry after 8.311020116s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.493125ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:41.155231    9136 retry.go:31] will retry after 1.110425997s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.234125ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:42.359223    9136 retry.go:31] will retry after 1.27979245s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.479042ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:43.733823    9136 retry.go:31] will retry after 2.220560618s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.282916ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:46.047072    9136 retry.go:31] will retry after 4.915837732s: exit status 83
I1205 10:51:49.001776    9136 retry.go:31] will retry after 12.392671082s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (93.3555ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "sudo umount -f /mount-9p": exit status 83 (48.874708ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-606000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4005845056/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1402809280/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1402809280/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1402809280/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (91.855792ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:51.323177    9136 retry.go:31] will retry after 684.040824ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (89.415084ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:52.098923    9136 retry.go:31] will retry after 732.693888ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (91.412583ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:52.925353    9136 retry.go:31] will retry after 988.634568ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (91.227625ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:54.007498    9136 retry.go:31] will retry after 1.077246143s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (91.283167ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:55.178410    9136 retry.go:31] will retry after 2.061607791s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (92.351958ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
I1205 10:51:57.334705    9136 retry.go:31] will retry after 5.645165296s: exit status 83
I1205 10:52:01.396717    9136 retry.go:31] will retry after 22.514325036s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-606000 ssh "findmnt -T" /mount1: exit status 83 (90.334666ms)

-- stdout --
	* The control-plane node functional-606000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-606000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1402809280/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1402809280/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-606000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1402809280/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.25s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
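
The "--gvisor=false" message above reflects the usual opt-in-flag pattern for expensive integration tests. A minimal sketch of that wiring with the stock flag and testing packages (not necessarily minikube's exact flag definition):

package integration

import (
	"flag"
	"testing"
)

// gvisor defaults to false, so the suite must be invoked with the flag set
// for this test to run; otherwise it self-skips as in the log above.
var gvisor = flag.Bool("gvisor", false, "run tests that require the gVisor addon")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... exercise the gVisor addon here ...
}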

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-972000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-972000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-972000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/hosts:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/resolv.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-972000

>>> host: crictl pods:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crictl containers:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: describe netcat deployment:
error: context "cilium-972000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-972000" does not exist

>>> k8s: netcat logs:
error: context "cilium-972000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-972000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-972000" does not exist

>>> k8s: coredns logs:
error: context "cilium-972000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-972000" does not exist

>>> k8s: api server logs:
error: context "cilium-972000" does not exist

>>> host: /etc/cni:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: ip a s:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: ip r s:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: iptables-save:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: iptables table nat:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-972000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-972000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-972000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-972000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-972000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-972000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-972000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-972000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: kubelet daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: kubelet logs:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-972000

>>> host: docker daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: docker daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: docker system info:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: cri-docker daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: cri-docker daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: cri-dockerd version:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: containerd daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: containerd daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: containerd config dump:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crio daemon status:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crio daemon config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: /etc/crio:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

>>> host: crio config:
* Profile "cilium-972000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-972000"

----------------------- debugLogs end: cilium-972000 [took: 2.384427833s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-972000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-972000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)
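
Each ">>> label:" block in the debugLogs dump above is one diagnostic command run against the cilium-972000 profile, which was never started, so every probe reports a missing context or profile. A rough sketch of that collect-and-print loop (the probe list here is illustrative, not the test's actual command set):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "cilium-972000" // profile name taken from the log
	probes := []struct {
		label string
		args  []string
	}{
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
		{"host: ip a s", []string{"out/minikube-darwin-arm64", "-p", profile, "ssh", "ip a s"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Failed probes are still reported, matching the dump above.
			fmt.Println(err)
		}
		fmt.Println()
	}
}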

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-707000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-707000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
